00:00:00.001 Started by upstream project "autotest-per-patch" build number 132773 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.018 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.019 The recommended git tool is: git 00:00:00.019 using credential 00000000-0000-0000-0000-000000000002 00:00:00.021 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.040 Fetching changes from the remote Git repository 00:00:00.047 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.085 Using shallow fetch with depth 1 00:00:00.085 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.085 > git --version # timeout=10 00:00:00.126 > git --version # 'git version 2.39.2' 00:00:00.126 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.172 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.172 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:02.653 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:02.675 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:02.698 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:02.698 > git config core.sparsecheckout # timeout=10 00:00:02.724 > git read-tree -mu HEAD # timeout=10 00:00:02.744 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:02.769 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:02.769 > git rev-list 
--no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:02.907 [Pipeline] Start of Pipeline 00:00:02.922 [Pipeline] library 00:00:02.924 Loading library shm_lib@master 00:00:02.924 Library shm_lib@master is cached. Copying from home. 00:00:02.942 [Pipeline] node 01:01:49.435 Still waiting to schedule task 01:01:49.436 Waiting for next available executor on ‘vagrant-vm-host’ 01:14:35.503 Running on VM-host-SM0 in /var/jenkins/workspace/raid-vg-autotest 01:14:35.505 [Pipeline] { 01:14:35.516 [Pipeline] catchError 01:14:35.519 [Pipeline] { 01:14:35.531 [Pipeline] wrap 01:14:35.540 [Pipeline] { 01:14:35.547 [Pipeline] stage 01:14:35.548 [Pipeline] { (Prologue) 01:14:35.566 [Pipeline] echo 01:14:35.568 Node: VM-host-SM0 01:14:35.573 [Pipeline] cleanWs 01:14:35.580 [WS-CLEANUP] Deleting project workspace... 01:14:35.580 [WS-CLEANUP] Deferred wipeout is used... 01:14:35.585 [WS-CLEANUP] done 01:14:35.853 [Pipeline] setCustomBuildProperty 01:14:35.949 [Pipeline] httpRequest 01:14:36.352 [Pipeline] echo 01:14:36.354 Sorcerer 10.211.164.101 is alive 01:14:36.364 [Pipeline] retry 01:14:36.366 [Pipeline] { 01:14:36.380 [Pipeline] httpRequest 01:14:36.384 HttpMethod: GET 01:14:36.385 URL: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 01:14:36.385 Sending request to url: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 01:14:36.387 Response Code: HTTP/1.1 200 OK 01:14:36.387 Success: Status code 200 is in the accepted range: 200,404 01:14:36.387 Saving response body to /var/jenkins/workspace/raid-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 01:14:36.533 [Pipeline] } 01:14:36.550 [Pipeline] // retry 01:14:36.557 [Pipeline] sh 01:14:36.836 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 01:14:36.856 [Pipeline] httpRequest 01:14:37.260 [Pipeline] echo 01:14:37.262 Sorcerer 10.211.164.101 is alive 01:14:37.272 [Pipeline] retry 01:14:37.274 [Pipeline] 
{ 01:14:37.289 [Pipeline] httpRequest 01:14:37.295 HttpMethod: GET 01:14:37.295 URL: http://10.211.164.101/packages/spdk_66902d69af506c19fa2a7701832daf75f8183e0d.tar.gz 01:14:37.296 Sending request to url: http://10.211.164.101/packages/spdk_66902d69af506c19fa2a7701832daf75f8183e0d.tar.gz 01:14:37.297 Response Code: HTTP/1.1 200 OK 01:14:37.297 Success: Status code 200 is in the accepted range: 200,404 01:14:37.298 Saving response body to /var/jenkins/workspace/raid-vg-autotest/spdk_66902d69af506c19fa2a7701832daf75f8183e0d.tar.gz 01:14:39.565 [Pipeline] } 01:14:39.583 [Pipeline] // retry 01:14:39.591 [Pipeline] sh 01:14:39.873 + tar --no-same-owner -xf spdk_66902d69af506c19fa2a7701832daf75f8183e0d.tar.gz 01:14:42.420 [Pipeline] sh 01:14:42.702 + git -C spdk log --oneline -n5 01:14:42.702 66902d69a env: explicitly set --legacy-mem flag in no hugepages mode 01:14:42.702 421ce3854 env: add mem_map_fini and vtophys_fini to cleanup mem maps 01:14:42.702 35cd3e84d bdev/part: Pass through dif_check_flags via dif_check_flags_exclude_mask 01:14:42.702 01a2c4855 bdev/passthru: Pass through dif_check_flags via dif_check_flags_exclude_mask 01:14:42.702 9094b9600 bdev: Assert to check if I/O pass dif_check_flags not enabled by bdev 01:14:42.730 [Pipeline] writeFile 01:14:42.750 [Pipeline] sh 01:14:43.036 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 01:14:43.049 [Pipeline] sh 01:14:43.330 + cat autorun-spdk.conf 01:14:43.330 SPDK_RUN_FUNCTIONAL_TEST=1 01:14:43.330 SPDK_RUN_ASAN=1 01:14:43.330 SPDK_RUN_UBSAN=1 01:14:43.330 SPDK_TEST_RAID=1 01:14:43.330 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 01:14:43.337 RUN_NIGHTLY=0 01:14:43.340 [Pipeline] } 01:14:43.358 [Pipeline] // stage 01:14:43.379 [Pipeline] stage 01:14:43.383 [Pipeline] { (Run VM) 01:14:43.397 [Pipeline] sh 01:14:43.686 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 01:14:43.686 + echo 'Start stage prepare_nvme.sh' 01:14:43.686 Start stage prepare_nvme.sh 01:14:43.686 + [[ -n 5 ]] 01:14:43.686 + 
disk_prefix=ex5 01:14:43.686 + [[ -n /var/jenkins/workspace/raid-vg-autotest ]] 01:14:43.686 + [[ -e /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf ]] 01:14:43.686 + source /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf 01:14:43.686 ++ SPDK_RUN_FUNCTIONAL_TEST=1 01:14:43.686 ++ SPDK_RUN_ASAN=1 01:14:43.686 ++ SPDK_RUN_UBSAN=1 01:14:43.686 ++ SPDK_TEST_RAID=1 01:14:43.686 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 01:14:43.686 ++ RUN_NIGHTLY=0 01:14:43.686 + cd /var/jenkins/workspace/raid-vg-autotest 01:14:43.686 + nvme_files=() 01:14:43.686 + declare -A nvme_files 01:14:43.686 + backend_dir=/var/lib/libvirt/images/backends 01:14:43.686 + nvme_files['nvme.img']=5G 01:14:43.686 + nvme_files['nvme-cmb.img']=5G 01:14:43.686 + nvme_files['nvme-multi0.img']=4G 01:14:43.686 + nvme_files['nvme-multi1.img']=4G 01:14:43.686 + nvme_files['nvme-multi2.img']=4G 01:14:43.686 + nvme_files['nvme-openstack.img']=8G 01:14:43.686 + nvme_files['nvme-zns.img']=5G 01:14:43.686 + (( SPDK_TEST_NVME_PMR == 1 )) 01:14:43.686 + (( SPDK_TEST_FTL == 1 )) 01:14:43.686 + (( SPDK_TEST_NVME_FDP == 1 )) 01:14:43.686 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 01:14:43.686 + for nvme in "${!nvme_files[@]}" 01:14:43.686 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi2.img -s 4G 01:14:43.686 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 01:14:43.686 + for nvme in "${!nvme_files[@]}" 01:14:43.686 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-cmb.img -s 5G 01:14:43.686 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 01:14:43.686 + for nvme in "${!nvme_files[@]}" 01:14:43.686 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-openstack.img -s 8G 01:14:43.686 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 01:14:43.686 + for nvme in "${!nvme_files[@]}" 01:14:43.686 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-zns.img -s 5G 01:14:43.686 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 01:14:43.686 + for nvme in "${!nvme_files[@]}" 01:14:43.686 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi1.img -s 4G 01:14:43.686 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 01:14:43.686 + for nvme in "${!nvme_files[@]}" 01:14:43.686 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi0.img -s 4G 01:14:43.686 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 01:14:43.686 + for nvme in "${!nvme_files[@]}" 01:14:43.686 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme.img -s 5G 01:14:43.946 
Formatting '/var/lib/libvirt/images/backends/ex5-nvme.img', fmt=raw size=5368709120 preallocation=falloc 01:14:43.946 ++ sudo grep -rl ex5-nvme.img /etc/libvirt/qemu 01:14:43.946 + echo 'End stage prepare_nvme.sh' 01:14:43.946 End stage prepare_nvme.sh 01:14:43.955 [Pipeline] sh 01:14:44.232 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 01:14:44.232 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex5-nvme.img -b /var/lib/libvirt/images/backends/ex5-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img -H -a -v -f fedora39 01:14:44.232 01:14:44.232 DIR=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant 01:14:44.232 SPDK_DIR=/var/jenkins/workspace/raid-vg-autotest/spdk 01:14:44.232 VAGRANT_TARGET=/var/jenkins/workspace/raid-vg-autotest 01:14:44.232 HELP=0 01:14:44.232 DRY_RUN=0 01:14:44.232 NVME_FILE=/var/lib/libvirt/images/backends/ex5-nvme.img,/var/lib/libvirt/images/backends/ex5-nvme-multi0.img, 01:14:44.232 NVME_DISKS_TYPE=nvme,nvme, 01:14:44.232 NVME_AUTO_CREATE=0 01:14:44.232 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img, 01:14:44.232 NVME_CMB=,, 01:14:44.232 NVME_PMR=,, 01:14:44.232 NVME_ZNS=,, 01:14:44.232 NVME_MS=,, 01:14:44.232 NVME_FDP=,, 01:14:44.232 SPDK_VAGRANT_DISTRO=fedora39 01:14:44.232 SPDK_VAGRANT_VMCPU=10 01:14:44.232 SPDK_VAGRANT_VMRAM=12288 01:14:44.232 SPDK_VAGRANT_PROVIDER=libvirt 01:14:44.232 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 01:14:44.232 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 01:14:44.232 SPDK_OPENSTACK_NETWORK=0 01:14:44.232 VAGRANT_PACKAGE_BOX=0 01:14:44.232 
VAGRANTFILE=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant/Vagrantfile 01:14:44.232 FORCE_DISTRO=true 01:14:44.232 VAGRANT_BOX_VERSION= 01:14:44.232 EXTRA_VAGRANTFILES= 01:14:44.232 NIC_MODEL=e1000 01:14:44.232 01:14:44.232 mkdir: created directory '/var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt' 01:14:44.232 /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt /var/jenkins/workspace/raid-vg-autotest 01:14:46.813 Bringing machine 'default' up with 'libvirt' provider... 01:14:47.747 ==> default: Creating image (snapshot of base box volume). 01:14:47.747 ==> default: Creating domain with the following settings... 01:14:47.747 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1733720978_b120ae1e734b95c1df98 01:14:47.747 ==> default: -- Domain type: kvm 01:14:47.747 ==> default: -- Cpus: 10 01:14:47.747 ==> default: -- Feature: acpi 01:14:47.747 ==> default: -- Feature: apic 01:14:47.747 ==> default: -- Feature: pae 01:14:47.747 ==> default: -- Memory: 12288M 01:14:47.747 ==> default: -- Memory Backing: hugepages: 01:14:47.747 ==> default: -- Management MAC: 01:14:47.747 ==> default: -- Loader: 01:14:47.747 ==> default: -- Nvram: 01:14:47.747 ==> default: -- Base box: spdk/fedora39 01:14:47.747 ==> default: -- Storage pool: default 01:14:47.747 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1733720978_b120ae1e734b95c1df98.img (20G) 01:14:47.747 ==> default: -- Volume Cache: default 01:14:47.747 ==> default: -- Kernel: 01:14:47.747 ==> default: -- Initrd: 01:14:47.747 ==> default: -- Graphics Type: vnc 01:14:47.747 ==> default: -- Graphics Port: -1 01:14:47.747 ==> default: -- Graphics IP: 127.0.0.1 01:14:47.747 ==> default: -- Graphics Password: Not defined 01:14:47.747 ==> default: -- Video Type: cirrus 01:14:47.747 ==> default: -- Video VRAM: 9216 01:14:47.747 ==> default: -- Sound Type: 01:14:47.747 ==> default: -- Keymap: en-us 01:14:47.747 ==> default: -- TPM Path: 01:14:47.747 ==> 
default: -- INPUT: type=mouse, bus=ps2 01:14:47.747 ==> default: -- Command line args: 01:14:47.747 ==> default: -> value=-device, 01:14:47.747 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 01:14:47.747 ==> default: -> value=-drive, 01:14:47.747 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme.img,if=none,id=nvme-0-drive0, 01:14:47.747 ==> default: -> value=-device, 01:14:47.747 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 01:14:47.747 ==> default: -> value=-device, 01:14:47.747 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 01:14:47.747 ==> default: -> value=-drive, 01:14:47.747 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi0.img,if=none,id=nvme-1-drive0, 01:14:47.747 ==> default: -> value=-device, 01:14:47.747 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 01:14:47.747 ==> default: -> value=-drive, 01:14:47.747 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi1.img,if=none,id=nvme-1-drive1, 01:14:47.747 ==> default: -> value=-device, 01:14:47.747 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 01:14:47.747 ==> default: -> value=-drive, 01:14:47.748 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi2.img,if=none,id=nvme-1-drive2, 01:14:47.748 ==> default: -> value=-device, 01:14:47.748 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 01:14:48.006 ==> default: Creating shared folders metadata... 01:14:48.006 ==> default: Starting domain. 01:14:49.918 ==> default: Waiting for domain to get an IP address... 01:15:07.997 ==> default: Waiting for SSH to become available... 
01:15:07.997 ==> default: Configuring and enabling network interfaces... 01:15:10.531 default: SSH address: 192.168.121.136:22 01:15:10.531 default: SSH username: vagrant 01:15:10.531 default: SSH auth method: private key 01:15:12.436 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 01:15:20.557 ==> default: Mounting SSHFS shared folder... 01:15:21.936 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 01:15:21.936 ==> default: Checking Mount.. 01:15:23.314 ==> default: Folder Successfully Mounted! 01:15:23.314 ==> default: Running provisioner: file... 01:15:23.882 default: ~/.gitconfig => .gitconfig 01:15:24.448 01:15:24.448 SUCCESS! 01:15:24.448 01:15:24.448 cd to /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 01:15:24.448 Use vagrant "suspend" and vagrant "resume" to stop and start. 01:15:24.448 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 
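The `prepare_nvme.sh` trace earlier in this log builds a set of raw backing files from an associative array mapping image names to sizes. A minimal standalone sketch of that loop, under stated assumptions: the paths are stand-ins, and `truncate` is substituted for the real `spdk/scripts/vagrant/create_nvme_img.sh` helper (whose `Formatting ... preallocation=falloc` output suggests it wraps `qemu-img`, but that is an inference, not confirmed by this log):

```shell
#!/usr/bin/env bash
set -euo pipefail

backend_dir=$(mktemp -d)   # stand-in for /var/lib/libvirt/images/backends
disk_prefix=ex5            # matches the "[[ -n 5 ]]" disk prefix in the trace

# Same shape as the nvme_files table in the trace (subset of entries).
declare -A nvme_files=(
  ['nvme.img']=5G
  ['nvme-multi0.img']=4G
  ['nvme-multi1.img']=4G
  ['nvme-multi2.img']=4G
)

for nvme in "${!nvme_files[@]}"; do
  img="$backend_dir/$disk_prefix-$nvme"
  # The real job calls create_nvme_img.sh -n <path> -s <size>; truncate
  # creates a sparse file of the same apparent size and keeps this sketch
  # dependency-free.
  truncate -s "${nvme_files[$nvme]}" "$img"
  echo "created $img (${nvme_files[$nvme]})"
done
```

The resulting files are what the vagrant stage later attaches as `-drive format=raw,file=.../ex5-nvme*.img` qemu arguments.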
01:15:24.448 01:15:24.456 [Pipeline] } 01:15:24.472 [Pipeline] // stage 01:15:24.480 [Pipeline] dir 01:15:24.481 Running in /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt 01:15:24.482 [Pipeline] { 01:15:24.494 [Pipeline] catchError 01:15:24.496 [Pipeline] { 01:15:24.508 [Pipeline] sh 01:15:24.785 + vagrant ssh-config --host vagrant 01:15:24.785 + sed -ne /^Host/,$p 01:15:24.785 + tee ssh_conf 01:15:28.064 Host vagrant 01:15:28.064 HostName 192.168.121.136 01:15:28.064 User vagrant 01:15:28.064 Port 22 01:15:28.064 UserKnownHostsFile /dev/null 01:15:28.064 StrictHostKeyChecking no 01:15:28.064 PasswordAuthentication no 01:15:28.064 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 01:15:28.064 IdentitiesOnly yes 01:15:28.064 LogLevel FATAL 01:15:28.064 ForwardAgent yes 01:15:28.064 ForwardX11 yes 01:15:28.064 01:15:28.078 [Pipeline] withEnv 01:15:28.081 [Pipeline] { 01:15:28.096 [Pipeline] sh 01:15:28.379 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 01:15:28.379 source /etc/os-release 01:15:28.379 [[ -e /image.version ]] && img=$(< /image.version) 01:15:28.379 # Minimal, systemd-like check. 01:15:28.379 if [[ -e /.dockerenv ]]; then 01:15:28.379 # Clear garbage from the node's name: 01:15:28.379 # agt-er_autotest_547-896 -> autotest_547-896 01:15:28.379 # $HOSTNAME is the actual container id 01:15:28.379 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 01:15:28.379 if grep -q "/etc/hostname" /proc/self/mountinfo; then 01:15:28.379 # We can assume this is a mount from a host where container is running, 01:15:28.379 # so fetch its hostname to easily identify the target swarm worker. 
01:15:28.379 container="$(< /etc/hostname) ($agent)" 01:15:28.379 else 01:15:28.379 # Fallback 01:15:28.379 container=$agent 01:15:28.379 fi 01:15:28.379 fi 01:15:28.379 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 01:15:28.379 01:15:28.710 [Pipeline] } 01:15:28.728 [Pipeline] // withEnv 01:15:28.736 [Pipeline] setCustomBuildProperty 01:15:28.750 [Pipeline] stage 01:15:28.753 [Pipeline] { (Tests) 01:15:28.770 [Pipeline] sh 01:15:29.050 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 01:15:29.323 [Pipeline] sh 01:15:29.601 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 01:15:29.872 [Pipeline] timeout 01:15:29.873 Timeout set to expire in 1 hr 30 min 01:15:29.875 [Pipeline] { 01:15:29.887 [Pipeline] sh 01:15:30.164 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 01:15:30.730 HEAD is now at 66902d69a env: explicitly set --legacy-mem flag in no hugepages mode 01:15:30.743 [Pipeline] sh 01:15:31.023 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 01:15:31.296 [Pipeline] sh 01:15:31.576 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 01:15:31.850 [Pipeline] sh 01:15:32.135 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=raid-vg-autotest ./autoruner.sh spdk_repo 01:15:32.393 ++ readlink -f spdk_repo 01:15:32.393 + DIR_ROOT=/home/vagrant/spdk_repo 01:15:32.393 + [[ -n /home/vagrant/spdk_repo ]] 01:15:32.394 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 01:15:32.394 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 01:15:32.394 + [[ -d /home/vagrant/spdk_repo/spdk ]] 01:15:32.394 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 01:15:32.394 + [[ -d /home/vagrant/spdk_repo/output ]] 01:15:32.394 + [[ raid-vg-autotest == pkgdep-* ]] 01:15:32.394 + cd /home/vagrant/spdk_repo 01:15:32.394 + source /etc/os-release 01:15:32.394 ++ NAME='Fedora Linux' 01:15:32.394 ++ VERSION='39 (Cloud Edition)' 01:15:32.394 ++ ID=fedora 01:15:32.394 ++ VERSION_ID=39 01:15:32.394 ++ VERSION_CODENAME= 01:15:32.394 ++ PLATFORM_ID=platform:f39 01:15:32.394 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 01:15:32.394 ++ ANSI_COLOR='0;38;2;60;110;180' 01:15:32.394 ++ LOGO=fedora-logo-icon 01:15:32.394 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 01:15:32.394 ++ HOME_URL=https://fedoraproject.org/ 01:15:32.394 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 01:15:32.394 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 01:15:32.394 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 01:15:32.394 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 01:15:32.394 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 01:15:32.394 ++ REDHAT_SUPPORT_PRODUCT=Fedora 01:15:32.394 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 01:15:32.394 ++ SUPPORT_END=2024-11-12 01:15:32.394 ++ VARIANT='Cloud Edition' 01:15:32.394 ++ VARIANT_ID=cloud 01:15:32.394 + uname -a 01:15:32.394 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 01:15:32.394 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 01:15:32.653 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:15:32.653 Hugepages 01:15:32.653 node hugesize free / total 01:15:32.653 node0 1048576kB 0 / 0 01:15:32.653 node0 2048kB 0 / 0 01:15:32.653 01:15:32.653 Type BDF Vendor Device NUMA Driver Device Block devices 01:15:32.913 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 01:15:32.913 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 01:15:32.913 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 
nvme1n1 nvme1n2 nvme1n3 01:15:32.913 + rm -f /tmp/spdk-ld-path 01:15:32.913 + source autorun-spdk.conf 01:15:32.913 ++ SPDK_RUN_FUNCTIONAL_TEST=1 01:15:32.913 ++ SPDK_RUN_ASAN=1 01:15:32.913 ++ SPDK_RUN_UBSAN=1 01:15:32.913 ++ SPDK_TEST_RAID=1 01:15:32.913 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 01:15:32.913 ++ RUN_NIGHTLY=0 01:15:32.913 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 01:15:32.913 + [[ -n '' ]] 01:15:32.913 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 01:15:32.913 + for M in /var/spdk/build-*-manifest.txt 01:15:32.913 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 01:15:32.913 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 01:15:32.913 + for M in /var/spdk/build-*-manifest.txt 01:15:32.913 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 01:15:32.913 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 01:15:32.913 + for M in /var/spdk/build-*-manifest.txt 01:15:32.913 + [[ -f /var/spdk/build-repo-manifest.txt ]] 01:15:32.913 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 01:15:32.913 ++ uname 01:15:32.913 + [[ Linux == \L\i\n\u\x ]] 01:15:32.913 + sudo dmesg -T 01:15:32.913 + sudo dmesg --clear 01:15:32.913 + dmesg_pid=5266 01:15:32.913 + [[ Fedora Linux == FreeBSD ]] 01:15:32.913 + sudo dmesg -Tw 01:15:32.913 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 01:15:32.913 + UNBIND_ENTIRE_IOMMU_GROUP=yes 01:15:32.913 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 01:15:32.913 + [[ -x /usr/src/fio-static/fio ]] 01:15:32.913 + export FIO_BIN=/usr/src/fio-static/fio 01:15:32.913 + FIO_BIN=/usr/src/fio-static/fio 01:15:32.913 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 01:15:32.913 + [[ ! 
-v VFIO_QEMU_BIN ]] 01:15:32.913 + [[ -e /usr/local/qemu/vfio-user-latest ]] 01:15:32.913 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 01:15:32.913 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 01:15:32.913 + [[ -e /usr/local/qemu/vanilla-latest ]] 01:15:32.913 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 01:15:32.913 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 01:15:32.913 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 01:15:33.172 05:10:24 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 01:15:33.172 05:10:24 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 01:15:33.172 05:10:24 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 01:15:33.172 05:10:24 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_RUN_ASAN=1 01:15:33.172 05:10:24 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_RUN_UBSAN=1 01:15:33.172 05:10:24 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_RAID=1 01:15:33.172 05:10:24 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 01:15:33.172 05:10:24 -- spdk_repo/autorun-spdk.conf@6 -- $ RUN_NIGHTLY=0 01:15:33.172 05:10:24 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 01:15:33.172 05:10:24 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 01:15:33.172 05:10:24 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 01:15:33.173 05:10:24 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:15:33.173 05:10:24 -- scripts/common.sh@15 -- $ shopt -s extglob 01:15:33.173 05:10:24 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 01:15:33.173 05:10:24 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:15:33.173 05:10:24 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 01:15:33.173 05:10:24 -- 
paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:15:33.173 05:10:24 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:15:33.173 05:10:24 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:15:33.173 05:10:24 -- paths/export.sh@5 -- $ export PATH 01:15:33.173 05:10:24 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:15:33.173 05:10:24 -- 
common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output 01:15:33.173 05:10:24 -- common/autobuild_common.sh@493 -- $ date +%s 01:15:33.173 05:10:24 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733721024.XXXXXX 01:15:33.173 05:10:24 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733721024.ui6gJ8 01:15:33.173 05:10:24 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 01:15:33.173 05:10:24 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 01:15:33.173 05:10:24 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 01:15:33.173 05:10:24 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 01:15:33.173 05:10:24 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 01:15:33.173 05:10:24 -- common/autobuild_common.sh@509 -- $ get_config_params 01:15:33.173 05:10:24 -- common/autotest_common.sh@409 -- $ xtrace_disable 01:15:33.173 05:10:24 -- common/autotest_common.sh@10 -- $ set +x 01:15:33.173 05:10:24 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f' 01:15:33.173 05:10:24 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 01:15:33.173 05:10:24 -- pm/common@17 -- $ local monitor 01:15:33.173 05:10:24 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 01:15:33.173 05:10:24 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 01:15:33.173 05:10:24 -- pm/common@25 -- $ sleep 1 01:15:33.173 05:10:24 -- pm/common@21 -- $ date +%s 01:15:33.173 05:10:24 -- pm/common@21 -- $ date +%s 01:15:33.173 
05:10:24 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733721024
01:15:33.173 05:10:24 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733721024
01:15:33.173 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733721024_collect-cpu-load.pm.log
01:15:33.173 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733721024_collect-vmstat.pm.log
01:15:34.113 05:10:25 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
01:15:34.113 05:10:25 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
01:15:34.113 05:10:25 -- spdk/autobuild.sh@12 -- $ umask 022
01:15:34.113 05:10:25 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
01:15:34.113 05:10:25 -- spdk/autobuild.sh@16 -- $ date -u
01:15:34.113 Mon Dec 9 05:10:25 AM UTC 2024
01:15:34.113 05:10:25 -- spdk/autobuild.sh@17 -- $ git describe --tags
01:15:34.113 v25.01-pre-278-g66902d69a
01:15:34.113 05:10:25 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
01:15:34.113 05:10:25 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
01:15:34.113 05:10:25 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
01:15:34.113 05:10:25 -- common/autotest_common.sh@1111 -- $ xtrace_disable
01:15:34.113 05:10:25 -- common/autotest_common.sh@10 -- $ set +x
01:15:34.113 ************************************
01:15:34.113 START TEST asan
01:15:34.113 ************************************
01:15:34.113 using asan
01:15:34.113 05:10:25 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan'
01:15:34.113
01:15:34.113 real 0m0.000s
01:15:34.113 user 0m0.000s
01:15:34.113 sys 0m0.000s
01:15:34.113 ************************************
01:15:34.113 END TEST asan
01:15:34.113 ************************************
01:15:34.113 05:10:25 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable
01:15:34.113 05:10:25 asan -- common/autotest_common.sh@10 -- $ set +x
01:15:34.113 05:10:25 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
01:15:34.113 05:10:25 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
01:15:34.113 05:10:25 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
01:15:34.113 05:10:25 -- common/autotest_common.sh@1111 -- $ xtrace_disable
01:15:34.113 05:10:25 -- common/autotest_common.sh@10 -- $ set +x
01:15:34.113 ************************************
01:15:34.113 START TEST ubsan
01:15:34.113 ************************************
01:15:34.113 using ubsan
01:15:34.113 05:10:25 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
01:15:34.113
01:15:34.113 real 0m0.000s
01:15:34.113 user 0m0.000s
01:15:34.113 sys 0m0.000s
01:15:34.113 05:10:25 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
01:15:34.113 05:10:25 ubsan -- common/autotest_common.sh@10 -- $ set +x
01:15:34.113 ************************************
01:15:34.113 END TEST ubsan
01:15:34.113 ************************************
01:15:34.371 05:10:25 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
01:15:34.371 05:10:25 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
01:15:34.371 05:10:25 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
01:15:34.371 05:10:25 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
01:15:34.371 05:10:25 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
01:15:34.371 05:10:25 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
01:15:34.371 05:10:25 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
01:15:34.371 05:10:25 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
01:15:34.371 05:10:25 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-shared
01:15:34.371 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
01:15:34.371 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
01:15:34.937 Using 'verbs' RDMA provider
01:15:50.810 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
01:16:03.017 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
01:16:03.017 Creating mk/config.mk...done.
01:16:03.017 Creating mk/cc.flags.mk...done.
01:16:03.017 Type 'make' to build.
01:16:03.017 05:10:53 -- spdk/autobuild.sh@70 -- $ run_test make make -j10
01:16:03.017 05:10:53 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
01:16:03.017 05:10:53 -- common/autotest_common.sh@1111 -- $ xtrace_disable
01:16:03.017 05:10:53 -- common/autotest_common.sh@10 -- $ set +x
01:16:03.017 ************************************
01:16:03.017 START TEST make
01:16:03.017 ************************************
01:16:03.017 05:10:53 make -- common/autotest_common.sh@1129 -- $ make -j10
01:16:03.017 make[1]: Nothing to be done for 'all'.
01:16:15.244 The Meson build system
01:16:15.244 Version: 1.5.0
01:16:15.244 Source dir: /home/vagrant/spdk_repo/spdk/dpdk
01:16:15.244 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp
01:16:15.244 Build type: native build
01:16:15.244 Program cat found: YES (/usr/bin/cat)
01:16:15.244 Project name: DPDK
01:16:15.244 Project version: 24.03.0
01:16:15.244 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
01:16:15.244 C linker for the host machine: cc ld.bfd 2.40-14
01:16:15.244 Host machine cpu family: x86_64
01:16:15.244 Host machine cpu: x86_64
01:16:15.244 Message: ## Building in Developer Mode ##
01:16:15.244 Program pkg-config found: YES (/usr/bin/pkg-config)
01:16:15.244 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh)
01:16:15.244 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh)
01:16:15.244 Program python3 found: YES (/usr/bin/python3)
01:16:15.244 Program cat found: YES (/usr/bin/cat)
01:16:15.244 Compiler for C supports arguments -march=native: YES
01:16:15.244 Checking for size of "void *" : 8
01:16:15.244 Checking for size of "void *" : 8 (cached)
01:16:15.244 Compiler for C supports link arguments -Wl,--undefined-version: YES
01:16:15.244 Library m found: YES
01:16:15.244 Library numa found: YES
01:16:15.244 Has header "numaif.h" : YES
01:16:15.244 Library fdt found: NO
01:16:15.244 Library execinfo found: NO
01:16:15.244 Has header "execinfo.h" : YES
01:16:15.244 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
01:16:15.244 Run-time dependency libarchive found: NO (tried pkgconfig)
01:16:15.244 Run-time dependency libbsd found: NO (tried pkgconfig)
01:16:15.244 Run-time dependency jansson found: NO (tried pkgconfig)
01:16:15.244 Run-time dependency openssl found: YES 3.1.1
01:16:15.244 Run-time dependency libpcap found: YES 1.10.4
01:16:15.244 Has header "pcap.h" with dependency libpcap: YES
01:16:15.244 Compiler for C supports arguments -Wcast-qual: YES
01:16:15.244 Compiler for C supports arguments -Wdeprecated: YES
01:16:15.244 Compiler for C supports arguments -Wformat: YES
01:16:15.244 Compiler for C supports arguments -Wformat-nonliteral: NO
01:16:15.244 Compiler for C supports arguments -Wformat-security: NO
01:16:15.244 Compiler for C supports arguments -Wmissing-declarations: YES
01:16:15.244 Compiler for C supports arguments -Wmissing-prototypes: YES
01:16:15.244 Compiler for C supports arguments -Wnested-externs: YES
01:16:15.244 Compiler for C supports arguments -Wold-style-definition: YES
01:16:15.244 Compiler for C supports arguments -Wpointer-arith: YES
01:16:15.244 Compiler for C supports arguments -Wsign-compare: YES
01:16:15.244 Compiler for C supports arguments -Wstrict-prototypes: YES
01:16:15.244 Compiler for C supports arguments -Wundef: YES
01:16:15.244 Compiler for C supports arguments -Wwrite-strings: YES
01:16:15.244 Compiler for C supports arguments -Wno-address-of-packed-member: YES
01:16:15.244 Compiler for C supports arguments -Wno-packed-not-aligned: YES
01:16:15.244 Compiler for C supports arguments -Wno-missing-field-initializers: YES
01:16:15.244 Compiler for C supports arguments -Wno-zero-length-bounds: YES
01:16:15.244 Program objdump found: YES (/usr/bin/objdump)
01:16:15.244 Compiler for C supports arguments -mavx512f: YES
01:16:15.244 Checking if "AVX512 checking" compiles: YES
01:16:15.244 Fetching value of define "__SSE4_2__" : 1
01:16:15.244 Fetching value of define "__AES__" : 1
01:16:15.244 Fetching value of define "__AVX__" : 1
01:16:15.244 Fetching value of define "__AVX2__" : 1
01:16:15.244 Fetching value of define "__AVX512BW__" : (undefined)
01:16:15.244 Fetching value of define "__AVX512CD__" : (undefined)
01:16:15.244 Fetching value of define "__AVX512DQ__" : (undefined)
01:16:15.244 Fetching value of define "__AVX512F__" : (undefined)
01:16:15.244 Fetching value of define "__AVX512VL__" : (undefined)
01:16:15.244 Fetching value of define "__PCLMUL__" : 1
01:16:15.244 Fetching value of define "__RDRND__" : 1
01:16:15.244 Fetching value of define "__RDSEED__" : 1
01:16:15.244 Fetching value of define "__VPCLMULQDQ__" : (undefined)
01:16:15.244 Fetching value of define "__znver1__" : (undefined)
01:16:15.244 Fetching value of define "__znver2__" : (undefined)
01:16:15.244 Fetching value of define "__znver3__" : (undefined)
01:16:15.244 Fetching value of define "__znver4__" : (undefined)
01:16:15.244 Library asan found: YES
01:16:15.244 Compiler for C supports arguments -Wno-format-truncation: YES
01:16:15.244 Message: lib/log: Defining dependency "log"
01:16:15.244 Message: lib/kvargs: Defining dependency "kvargs"
01:16:15.244 Message: lib/telemetry: Defining dependency "telemetry"
01:16:15.244 Library rt found: YES
01:16:15.244 Checking for function "getentropy" : NO
01:16:15.244 Message: lib/eal: Defining dependency "eal"
01:16:15.244 Message: lib/ring: Defining dependency "ring"
01:16:15.244 Message: lib/rcu: Defining dependency "rcu"
01:16:15.244 Message: lib/mempool: Defining dependency "mempool"
01:16:15.244 Message: lib/mbuf: Defining dependency "mbuf"
01:16:15.244 Fetching value of define "__PCLMUL__" : 1 (cached)
01:16:15.244 Fetching value of define "__AVX512F__" : (undefined) (cached)
01:16:15.244 Compiler for C supports arguments -mpclmul: YES
01:16:15.244 Compiler for C supports arguments -maes: YES
01:16:15.244 Compiler for C supports arguments -mavx512f: YES (cached)
01:16:15.244 Compiler for C supports arguments -mavx512bw: YES
01:16:15.244 Compiler for C supports arguments -mavx512dq: YES
01:16:15.244 Compiler for C supports arguments -mavx512vl: YES
01:16:15.244 Compiler for C supports arguments -mvpclmulqdq: YES
01:16:15.244 Compiler for C supports arguments -mavx2: YES
01:16:15.244 Compiler for C supports arguments -mavx: YES
01:16:15.244 Message: lib/net: Defining dependency "net"
01:16:15.244 Message: lib/meter: Defining dependency "meter"
01:16:15.244 Message: lib/ethdev: Defining dependency "ethdev"
01:16:15.244 Message: lib/pci: Defining dependency "pci"
01:16:15.244 Message: lib/cmdline: Defining dependency "cmdline"
01:16:15.244 Message: lib/hash: Defining dependency "hash"
01:16:15.244 Message: lib/timer: Defining dependency "timer"
01:16:15.244 Message: lib/compressdev: Defining dependency "compressdev"
01:16:15.244 Message: lib/cryptodev: Defining dependency "cryptodev"
01:16:15.244 Message: lib/dmadev: Defining dependency "dmadev"
01:16:15.244 Compiler for C supports arguments -Wno-cast-qual: YES
01:16:15.244 Message: lib/power: Defining dependency "power"
01:16:15.244 Message: lib/reorder: Defining dependency "reorder"
01:16:15.244 Message: lib/security: Defining dependency "security"
01:16:15.244 Has header "linux/userfaultfd.h" : YES
01:16:15.244 Has header "linux/vduse.h" : YES
01:16:15.244 Message: lib/vhost: Defining dependency "vhost"
01:16:15.244 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
01:16:15.244 Message: drivers/bus/pci: Defining dependency "bus_pci"
01:16:15.244 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
01:16:15.244 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
01:16:15.244 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
01:16:15.244 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
01:16:15.245 Message: Disabling ml/* drivers: missing internal dependency "mldev"
01:16:15.245 Message: Disabling event/* drivers: missing internal dependency "eventdev"
01:16:15.245 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
01:16:15.245 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
01:16:15.245 Program doxygen found: YES (/usr/local/bin/doxygen)
01:16:15.245 Configuring doxy-api-html.conf using configuration
01:16:15.245 Configuring doxy-api-man.conf using configuration
01:16:15.245 Program mandb found: YES (/usr/bin/mandb)
01:16:15.245 Program sphinx-build found: NO
01:16:15.245 Configuring rte_build_config.h using configuration
01:16:15.245 Message:
01:16:15.245 =================
01:16:15.245 Applications Enabled
01:16:15.245 =================
01:16:15.245
01:16:15.245 apps:
01:16:15.245
01:16:15.245
01:16:15.245 Message:
01:16:15.245 =================
01:16:15.245 Libraries Enabled
01:16:15.245 =================
01:16:15.245
01:16:15.245 libs:
01:16:15.245 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
01:16:15.245 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
01:16:15.245 cryptodev, dmadev, power, reorder, security, vhost,
01:16:15.245
01:16:15.245 Message:
01:16:15.245 ===============
01:16:15.245 Drivers Enabled
01:16:15.245 ===============
01:16:15.245
01:16:15.245 common:
01:16:15.245
01:16:15.245 bus:
01:16:15.245 pci, vdev,
01:16:15.245 mempool:
01:16:15.245 ring,
01:16:15.245 dma:
01:16:15.245
01:16:15.245 net:
01:16:15.245
01:16:15.245 crypto:
01:16:15.245
01:16:15.245 compress:
01:16:15.245
01:16:15.245 vdpa:
01:16:15.245
01:16:15.245
01:16:15.245 Message:
01:16:15.245 =================
01:16:15.245 Content Skipped
01:16:15.245 =================
01:16:15.245
01:16:15.245 apps:
01:16:15.245 dumpcap: explicitly disabled via build config
01:16:15.245 graph: explicitly disabled via build config
01:16:15.245 pdump: explicitly disabled via build config
01:16:15.245 proc-info: explicitly disabled via build config
01:16:15.245 test-acl: explicitly disabled via build config
01:16:15.245 test-bbdev: explicitly disabled via build config
01:16:15.245 test-cmdline: explicitly disabled via build config
01:16:15.245 test-compress-perf: explicitly disabled via build config
01:16:15.245 test-crypto-perf: explicitly disabled via build config
01:16:15.245 test-dma-perf: explicitly disabled via build config
01:16:15.245 test-eventdev: explicitly disabled via build config
01:16:15.245 test-fib: explicitly disabled via build config
01:16:15.245 test-flow-perf: explicitly disabled via build config
01:16:15.245 test-gpudev: explicitly disabled via build config
01:16:15.245 test-mldev: explicitly disabled via build config
01:16:15.245 test-pipeline: explicitly disabled via build config
01:16:15.245 test-pmd: explicitly disabled via build config
01:16:15.245 test-regex: explicitly disabled via build config
01:16:15.245 test-sad: explicitly disabled via build config
01:16:15.245 test-security-perf: explicitly disabled via build config
01:16:15.245
01:16:15.245 libs:
01:16:15.245 argparse: explicitly disabled via build config
01:16:15.245 metrics: explicitly disabled via build config
01:16:15.245 acl: explicitly disabled via build config
01:16:15.245 bbdev: explicitly disabled via build config
01:16:15.245 bitratestats: explicitly disabled via build config
01:16:15.245 bpf: explicitly disabled via build config
01:16:15.245 cfgfile: explicitly disabled via build config
01:16:15.245 distributor: explicitly disabled via build config
01:16:15.245 efd: explicitly disabled via build config
01:16:15.245 eventdev: explicitly disabled via build config
01:16:15.245 dispatcher: explicitly disabled via build config
01:16:15.245 gpudev: explicitly disabled via build config
01:16:15.245 gro: explicitly disabled via build config
01:16:15.245 gso: explicitly disabled via build config
01:16:15.245 ip_frag: explicitly disabled via build config
01:16:15.245 jobstats: explicitly disabled via build config
01:16:15.245 latencystats: explicitly disabled via build config
01:16:15.245 lpm: explicitly disabled via build config
01:16:15.245 member: explicitly disabled via build config
01:16:15.245 pcapng: explicitly disabled via build config
01:16:15.245 rawdev: explicitly disabled via build config
01:16:15.245 regexdev: explicitly disabled via build config
01:16:15.245 mldev: explicitly disabled via build config
01:16:15.245 rib: explicitly disabled via build config
01:16:15.245 sched: explicitly disabled via build config
01:16:15.245 stack: explicitly disabled via build config
01:16:15.245 ipsec: explicitly disabled via build config
01:16:15.245 pdcp: explicitly disabled via build config
01:16:15.245 fib: explicitly disabled via build config
01:16:15.245 port: explicitly disabled via build config
01:16:15.245 pdump: explicitly disabled via build config
01:16:15.245 table: explicitly disabled via build config
01:16:15.245 pipeline: explicitly disabled via build config
01:16:15.245 graph: explicitly disabled via build config
01:16:15.245 node: explicitly disabled via build config
01:16:15.245
01:16:15.245 drivers:
01:16:15.245 common/cpt: not in enabled drivers build config
01:16:15.245 common/dpaax: not in enabled drivers build config
01:16:15.245 common/iavf: not in enabled drivers build config
01:16:15.245 common/idpf: not in enabled drivers build config
01:16:15.245 common/ionic: not in enabled drivers build config
01:16:15.245 common/mvep: not in enabled drivers build config
01:16:15.245 common/octeontx: not in enabled drivers build config
01:16:15.245 bus/auxiliary: not in enabled drivers build config
01:16:15.245 bus/cdx: not in enabled drivers build config
01:16:15.245 bus/dpaa: not in enabled drivers build config
01:16:15.245 bus/fslmc: not in enabled drivers build config
01:16:15.245 bus/ifpga: not in enabled drivers build config
01:16:15.245 bus/platform: not in enabled drivers build config
01:16:15.245 bus/uacce: not in enabled drivers build config
01:16:15.245 bus/vmbus: not in enabled drivers build config
01:16:15.245 common/cnxk: not in enabled drivers build config
01:16:15.245 common/mlx5: not in enabled drivers build config
01:16:15.245 common/nfp: not in enabled drivers build config
01:16:15.245 common/nitrox: not in enabled drivers build config
01:16:15.245 common/qat: not in enabled drivers build config
01:16:15.245 common/sfc_efx: not in enabled drivers build config
01:16:15.245 mempool/bucket: not in enabled drivers build config
01:16:15.245 mempool/cnxk: not in enabled drivers build config
01:16:15.245 mempool/dpaa: not in enabled drivers build config
01:16:15.245 mempool/dpaa2: not in enabled drivers build config
01:16:15.245 mempool/octeontx: not in enabled drivers build config
01:16:15.245 mempool/stack: not in enabled drivers build config
01:16:15.245 dma/cnxk: not in enabled drivers build config
01:16:15.245 dma/dpaa: not in enabled drivers build config
01:16:15.245 dma/dpaa2: not in enabled drivers build config
01:16:15.245 dma/hisilicon: not in enabled drivers build config
01:16:15.245 dma/idxd: not in enabled drivers build config
01:16:15.245 dma/ioat: not in enabled drivers build config
01:16:15.245 dma/skeleton: not in enabled drivers build config
01:16:15.245 net/af_packet: not in enabled drivers build config
01:16:15.245 net/af_xdp: not in enabled drivers build config
01:16:15.245 net/ark: not in enabled drivers build config
01:16:15.245 net/atlantic: not in enabled drivers build config
01:16:15.245 net/avp: not in enabled drivers build config
01:16:15.245 net/axgbe: not in enabled drivers build config
01:16:15.245 net/bnx2x: not in enabled drivers build config
01:16:15.245 net/bnxt: not in enabled drivers build config
01:16:15.245 net/bonding: not in enabled drivers build config
01:16:15.245 net/cnxk: not in enabled drivers build config
01:16:15.245 net/cpfl: not in enabled drivers build config
01:16:15.245 net/cxgbe: not in enabled drivers build config
01:16:15.245 net/dpaa: not in enabled drivers build config
01:16:15.245 net/dpaa2: not in enabled drivers build config
01:16:15.245 net/e1000: not in enabled drivers build config
01:16:15.245 net/ena: not in enabled drivers build config
01:16:15.245 net/enetc: not in enabled drivers build config
01:16:15.245 net/enetfec: not in enabled drivers build config
01:16:15.245 net/enic: not in enabled drivers build config
01:16:15.245 net/failsafe: not in enabled drivers build config
01:16:15.245 net/fm10k: not in enabled drivers build config
01:16:15.245 net/gve: not in enabled drivers build config
01:16:15.245 net/hinic: not in enabled drivers build config
01:16:15.245 net/hns3: not in enabled drivers build config
01:16:15.245 net/i40e: not in enabled drivers build config
01:16:15.245 net/iavf: not in enabled drivers build config
01:16:15.245 net/ice: not in enabled drivers build config
01:16:15.245 net/idpf: not in enabled drivers build config
01:16:15.245 net/igc: not in enabled drivers build config
01:16:15.245 net/ionic: not in enabled drivers build config
01:16:15.245 net/ipn3ke: not in enabled drivers build config
01:16:15.245 net/ixgbe: not in enabled drivers build config
01:16:15.245 net/mana: not in enabled drivers build config
01:16:15.245 net/memif: not in enabled drivers build config
01:16:15.245 net/mlx4: not in enabled drivers build config
01:16:15.245 net/mlx5: not in enabled drivers build config
01:16:15.245 net/mvneta: not in enabled drivers build config
01:16:15.245 net/mvpp2: not in enabled drivers build config
01:16:15.245 net/netvsc: not in enabled drivers build config
01:16:15.245 net/nfb: not in enabled drivers build config
01:16:15.245 net/nfp: not in enabled drivers build config
01:16:15.245 net/ngbe: not in enabled drivers build config
01:16:15.245 net/null: not in enabled drivers build config
01:16:15.245 net/octeontx: not in enabled drivers build config
01:16:15.245 net/octeon_ep: not in enabled drivers build config
01:16:15.245 net/pcap: not in enabled drivers build config
01:16:15.245 net/pfe: not in enabled drivers build config
01:16:15.245 net/qede: not in enabled drivers build config
01:16:15.246 net/ring: not in enabled drivers build config
01:16:15.246 net/sfc: not in enabled drivers build config
01:16:15.246 net/softnic: not in enabled drivers build config
01:16:15.246 net/tap: not in enabled drivers build config
01:16:15.246 net/thunderx: not in enabled drivers build config
01:16:15.246 net/txgbe: not in enabled drivers build config
01:16:15.246 net/vdev_netvsc: not in enabled drivers build config
01:16:15.246 net/vhost: not in enabled drivers build config
01:16:15.246 net/virtio: not in enabled drivers build config
01:16:15.246 net/vmxnet3: not in enabled drivers build config
01:16:15.246 raw/*: missing internal dependency, "rawdev"
01:16:15.246 crypto/armv8: not in enabled drivers build config
01:16:15.246 crypto/bcmfs: not in enabled drivers build config
01:16:15.246 crypto/caam_jr: not in enabled drivers build config
01:16:15.246 crypto/ccp: not in enabled drivers build config
01:16:15.246 crypto/cnxk: not in enabled drivers build config
01:16:15.246 crypto/dpaa_sec: not in enabled drivers build config
01:16:15.246 crypto/dpaa2_sec: not in enabled drivers build config
01:16:15.246 crypto/ipsec_mb: not in enabled drivers build config
01:16:15.246 crypto/mlx5: not in enabled drivers build config
01:16:15.246 crypto/mvsam: not in enabled drivers build config
01:16:15.246 crypto/nitrox: not in enabled drivers build config
01:16:15.246 crypto/null: not in enabled drivers build config
01:16:15.246 crypto/octeontx: not in enabled drivers build config
01:16:15.246 crypto/openssl: not in enabled drivers build config
01:16:15.246 crypto/scheduler: not in enabled drivers build config
01:16:15.246 crypto/uadk: not in enabled drivers build config
01:16:15.246 crypto/virtio: not in enabled drivers build config
01:16:15.246 compress/isal: not in enabled drivers build config
01:16:15.246 compress/mlx5: not in enabled drivers build config
01:16:15.246 compress/nitrox: not in enabled drivers build config
01:16:15.246 compress/octeontx: not in enabled drivers build config
01:16:15.246 compress/zlib: not in enabled drivers build config
01:16:15.246 regex/*: missing internal dependency, "regexdev"
01:16:15.246 ml/*: missing internal dependency, "mldev"
01:16:15.246 vdpa/ifc: not in enabled drivers build config
01:16:15.246 vdpa/mlx5: not in enabled drivers build config
01:16:15.246 vdpa/nfp: not in enabled drivers build config
01:16:15.246 vdpa/sfc: not in enabled drivers build config
01:16:15.246 event/*: missing internal dependency, "eventdev"
01:16:15.246 baseband/*: missing internal dependency, "bbdev"
01:16:15.246 gpu/*: missing internal dependency, "gpudev"
01:16:15.246
01:16:15.246
01:16:15.246 Build targets in project: 85
01:16:15.246
01:16:15.246 DPDK 24.03.0
01:16:15.246
01:16:15.246 User defined options
01:16:15.246 buildtype : debug
01:16:15.246 default_library : shared
01:16:15.246 libdir : lib
01:16:15.246 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build
01:16:15.246 b_sanitize : address
01:16:15.246 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
01:16:15.246 c_link_args :
01:16:15.246 cpu_instruction_set: native
01:16:15.246 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test
01:16:15.246 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table
01:16:15.246 enable_docs : false
01:16:15.246 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm
01:16:15.246 enable_kmods : false
01:16:15.246 max_lcores : 128
01:16:15.246 tests : false
01:16:15.246
01:16:15.246 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
01:16:15.246 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp'
01:16:15.505 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
01:16:15.505 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
01:16:15.505 [3/268] Linking static target lib/librte_kvargs.a
01:16:15.505 [4/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
01:16:15.505 [5/268] Compiling C object lib/librte_log.a.p/log_log.c.o
01:16:15.505 [6/268] Linking static target lib/librte_log.a
01:16:16.072 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
01:16:16.072 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
01:16:16.072 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
01:16:16.072 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
01:16:16.330 [11/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
01:16:16.330 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
01:16:16.330 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
01:16:16.330 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
01:16:16.330 [15/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
01:16:16.330 [16/268] Linking static target lib/librte_telemetry.a
01:16:16.330 [17/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
01:16:16.330 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
01:16:16.588 [19/268] Linking target lib/librte_log.so.24.1
01:16:16.588 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
01:16:16.846 [21/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols
01:16:16.846 [22/268] Linking target lib/librte_kvargs.so.24.1
01:16:17.105 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
01:16:17.105 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
01:16:17.105 [25/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols
01:16:17.105 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
01:16:17.105 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
01:16:17.105 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
01:16:17.363 [29/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
01:16:17.363 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
01:16:17.363 [31/268] Linking target lib/librte_telemetry.so.24.1
01:16:17.363 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
01:16:17.363 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
01:16:17.363 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
01:16:17.622 [35/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols
01:16:17.622 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
01:16:17.879 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
01:16:17.879 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
01:16:17.879 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
01:16:17.879 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
01:16:18.136 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
01:16:18.136 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
01:16:18.136 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
01:16:18.136 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
01:16:18.393 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
01:16:18.393 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
01:16:18.651 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
01:16:18.651 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
01:16:18.651 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
01:16:18.922 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
01:16:18.922 [51/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
01:16:18.922 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
01:16:18.922 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
01:16:18.922 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
01:16:19.189 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
01:16:19.189 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
01:16:19.502 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
01:16:19.502 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
01:16:19.759 [59/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
01:16:19.759 [60/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
01:16:19.759 [61/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
01:16:19.759 [62/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
01:16:19.759 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
01:16:19.759 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
01:16:20.016 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
01:16:20.016 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
01:16:20.016 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
01:16:20.582 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
01:16:20.582 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
01:16:20.582 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
01:16:20.582 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
01:16:20.582 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
01:16:20.582 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
01:16:20.582 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
01:16:20.840 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
01:16:20.840 [76/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
01:16:20.840 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
01:16:20.840 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
01:16:20.840 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
01:16:21.098 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
01:16:21.098 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
01:16:21.098 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
01:16:21.357 [83/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
01:16:21.357 [84/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
01:16:21.357 [85/268] Linking static target lib/librte_eal.a
01:16:21.357 [86/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
01:16:21.616 [87/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
01:16:21.616 [88/268] Linking static target lib/librte_ring.a
01:16:21.616 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
01:16:21.616 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
01:16:21.874 [91/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
01:16:21.874 [92/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
01:16:21.874 [93/268] Linking static target lib/librte_rcu.a
01:16:21.874 [94/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
01:16:22.133 [95/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
01:16:22.133 [96/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
01:16:22.133 [97/268] Linking static target lib/librte_mempool.a
01:16:22.392 [98/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
01:16:22.392 [99/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output)
01:16:22.392 [100/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
01:16:22.392 [101/268] Linking static target lib/net/libnet_crc_avx512_lib.a
01:16:22.392 [102/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
01:16:22.650 [103/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
01:16:22.650 [104/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
01:16:22.650 [105/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
01:16:22.650 [106/268] Linking static target lib/librte_mbuf.a
01:16:22.650 [107/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
01:16:22.909 [108/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
01:16:22.909 [109/268] Linking static target lib/librte_meter.a
01:16:23.168 [110/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
01:16:23.168 [111/268] Linking static target lib/librte_net.a
01:16:23.168 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
01:16:23.168 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
01:16:23.427 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
01:16:23.427 [115/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output)
01:16:23.427 [116/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
01:16:23.427 [117/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output)
01:16:23.427 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
01:16:23.690 [119/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output)
01:16:23.949 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o
01:16:23.949 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
01:16:24.207 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
01:16:24.207 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o
01:16:24.775 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
01:16:24.775 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
01:16:24.775 [126/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
01:16:24.775 [127/268] Linking static target lib/librte_pci.a
01:16:24.775 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o
01:16:24.775 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
01:16:24.775 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
01:16:25.034 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
01:16:25.034 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
01:16:25.034 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
01:16:25.034 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
01:16:25.034 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
01:16:25.034 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
01:16:25.034 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
01:16:25.034 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
01:16:25.292 [139/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o
01:16:25.292 [140/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
01:16:25.292 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
01:16:25.292 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
01:16:25.292 [143/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o
01:16:25.292 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
01:16:25.549 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
01:16:25.549 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
01:16:25.549 [147/268] Linking static target lib/librte_cmdline.a
01:16:25.807 [148/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o
01:16:25.807 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
01:16:25.807 [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o
01:16:25.807 [151/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
01:16:25.807 [152/268] Linking static target lib/librte_timer.a
01:16:26.065 [153/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
01:16:26.323 [154/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o
01:16:26.581 [155/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o
01:16:26.581 [156/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
01:16:26.581 [157/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output)
01:16:26.581 [158/268] Linking static target lib/librte_compressdev.a
01:16:26.581 [159/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
01:16:26.581 [160/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
01:16:26.840 [161/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o
01:16:26.840 [162/268] Linking static target lib/librte_hash.a
01:16:26.840 [163/268]
Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 01:16:26.840 [164/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 01:16:26.840 [165/268] Linking static target lib/librte_ethdev.a 01:16:27.099 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 01:16:27.099 [167/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 01:16:27.356 [168/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 01:16:27.356 [169/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 01:16:27.356 [170/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 01:16:27.356 [171/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 01:16:27.356 [172/268] Linking static target lib/librte_dmadev.a 01:16:27.356 [173/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 01:16:27.615 [174/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 01:16:27.873 [175/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 01:16:27.873 [176/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 01:16:28.131 [177/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 01:16:28.131 [178/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 01:16:28.131 [179/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 01:16:28.131 [180/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 01:16:28.388 [181/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 01:16:28.388 [182/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 01:16:28.646 [183/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 01:16:28.646 [184/268] Linking static target lib/librte_cryptodev.a 
01:16:28.646 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 01:16:28.646 [186/268] Linking static target lib/librte_power.a 01:16:28.904 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 01:16:29.163 [188/268] Linking static target lib/librte_reorder.a 01:16:29.163 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 01:16:29.163 [190/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 01:16:29.163 [191/268] Linking static target lib/librte_security.a 01:16:29.163 [192/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 01:16:29.163 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 01:16:29.730 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 01:16:29.730 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 01:16:29.989 [196/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 01:16:29.989 [197/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 01:16:30.248 [198/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 01:16:30.508 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 01:16:30.508 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 01:16:30.767 [201/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 01:16:30.767 [202/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 01:16:31.025 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 01:16:31.025 [204/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 01:16:31.025 [205/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 01:16:31.284 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 01:16:31.285 [207/268] 
Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 01:16:31.544 [208/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 01:16:31.544 [209/268] Linking static target drivers/libtmp_rte_bus_pci.a 01:16:31.544 [210/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 01:16:31.544 [211/268] Linking static target drivers/libtmp_rte_bus_vdev.a 01:16:31.803 [212/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 01:16:31.803 [213/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 01:16:31.803 [214/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 01:16:31.803 [215/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 01:16:31.803 [216/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 01:16:31.803 [217/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 01:16:31.803 [218/268] Linking static target drivers/librte_bus_vdev.a 01:16:31.803 [219/268] Linking static target drivers/librte_bus_pci.a 01:16:32.062 [220/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 01:16:32.062 [221/268] Linking static target drivers/libtmp_rte_mempool_ring.a 01:16:32.062 [222/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 01:16:32.320 [223/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 01:16:32.320 [224/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 01:16:32.320 [225/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 01:16:32.320 [226/268] Linking static target drivers/librte_mempool_ring.a 01:16:32.320 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to 
capture output) 01:16:32.887 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 01:16:33.159 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 01:16:33.159 [230/268] Linking target lib/librte_eal.so.24.1 01:16:33.159 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 01:16:33.437 [232/268] Linking target lib/librte_ring.so.24.1 01:16:33.437 [233/268] Linking target lib/librte_meter.so.24.1 01:16:33.437 [234/268] Linking target lib/librte_pci.so.24.1 01:16:33.437 [235/268] Linking target lib/librte_timer.so.24.1 01:16:33.437 [236/268] Linking target lib/librte_dmadev.so.24.1 01:16:33.437 [237/268] Linking target drivers/librte_bus_vdev.so.24.1 01:16:33.437 [238/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 01:16:33.437 [239/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 01:16:33.437 [240/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 01:16:33.437 [241/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 01:16:33.437 [242/268] Linking target lib/librte_rcu.so.24.1 01:16:33.437 [243/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 01:16:33.437 [244/268] Linking target lib/librte_mempool.so.24.1 01:16:33.437 [245/268] Linking target drivers/librte_bus_pci.so.24.1 01:16:33.705 [246/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 01:16:33.705 [247/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 01:16:33.705 [248/268] Linking target drivers/librte_mempool_ring.so.24.1 01:16:33.705 [249/268] Linking target lib/librte_mbuf.so.24.1 01:16:33.963 [250/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 01:16:33.964 [251/268] Linking target lib/librte_compressdev.so.24.1 01:16:33.964 [252/268] Linking 
target lib/librte_reorder.so.24.1 01:16:33.964 [253/268] Linking target lib/librte_net.so.24.1 01:16:33.964 [254/268] Linking target lib/librte_cryptodev.so.24.1 01:16:33.964 [255/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 01:16:33.964 [256/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 01:16:33.964 [257/268] Linking target lib/librte_hash.so.24.1 01:16:33.964 [258/268] Linking target lib/librte_cmdline.so.24.1 01:16:34.223 [259/268] Linking target lib/librte_security.so.24.1 01:16:34.223 [260/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 01:16:34.789 [261/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 01:16:34.789 [262/268] Linking target lib/librte_ethdev.so.24.1 01:16:35.058 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 01:16:35.058 [264/268] Linking target lib/librte_power.so.24.1 01:16:36.969 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 01:16:36.969 [266/268] Linking static target lib/librte_vhost.a 01:16:38.345 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 01:16:38.345 [268/268] Linking target lib/librte_vhost.so.24.1 01:16:38.345 INFO: autodetecting backend as ninja 01:16:38.345 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 01:16:56.430 CC lib/ut_mock/mock.o 01:16:56.430 CC lib/log/log.o 01:16:56.430 CC lib/log/log_deprecated.o 01:16:56.430 CC lib/log/log_flags.o 01:16:56.430 CC lib/ut/ut.o 01:16:56.430 LIB libspdk_ut.a 01:16:56.430 LIB libspdk_ut_mock.a 01:16:56.430 LIB libspdk_log.a 01:16:56.430 SO libspdk_ut_mock.so.6.0 01:16:56.430 SO libspdk_ut.so.2.0 01:16:56.430 SO libspdk_log.so.7.1 01:16:56.688 SYMLINK libspdk_ut_mock.so 01:16:56.688 SYMLINK libspdk_ut.so 01:16:56.688 SYMLINK libspdk_log.so 
01:16:56.688 CC lib/ioat/ioat.o 01:16:56.688 CXX lib/trace_parser/trace.o 01:16:56.947 CC lib/dma/dma.o 01:16:56.947 CC lib/util/base64.o 01:16:56.947 CC lib/util/bit_array.o 01:16:56.947 CC lib/util/crc16.o 01:16:56.947 CC lib/util/cpuset.o 01:16:56.947 CC lib/util/crc32.o 01:16:56.947 CC lib/util/crc32c.o 01:16:56.947 CC lib/vfio_user/host/vfio_user_pci.o 01:16:56.947 CC lib/util/crc32_ieee.o 01:16:56.947 CC lib/util/crc64.o 01:16:56.947 CC lib/vfio_user/host/vfio_user.o 01:16:56.947 CC lib/util/dif.o 01:16:57.204 LIB libspdk_dma.a 01:16:57.204 SO libspdk_dma.so.5.0 01:16:57.204 CC lib/util/fd.o 01:16:57.204 CC lib/util/fd_group.o 01:16:57.204 CC lib/util/file.o 01:16:57.204 SYMLINK libspdk_dma.so 01:16:57.204 CC lib/util/hexlify.o 01:16:57.204 CC lib/util/iov.o 01:16:57.204 LIB libspdk_ioat.a 01:16:57.204 CC lib/util/math.o 01:16:57.204 SO libspdk_ioat.so.7.0 01:16:57.204 LIB libspdk_vfio_user.a 01:16:57.204 CC lib/util/net.o 01:16:57.204 SO libspdk_vfio_user.so.5.0 01:16:57.462 SYMLINK libspdk_ioat.so 01:16:57.462 CC lib/util/pipe.o 01:16:57.462 CC lib/util/strerror_tls.o 01:16:57.462 CC lib/util/string.o 01:16:57.462 SYMLINK libspdk_vfio_user.so 01:16:57.462 CC lib/util/uuid.o 01:16:57.462 CC lib/util/xor.o 01:16:57.462 CC lib/util/zipf.o 01:16:57.462 CC lib/util/md5.o 01:16:57.721 LIB libspdk_util.a 01:16:57.980 SO libspdk_util.so.10.1 01:16:57.980 LIB libspdk_trace_parser.a 01:16:57.980 SO libspdk_trace_parser.so.6.0 01:16:57.980 SYMLINK libspdk_util.so 01:16:57.980 SYMLINK libspdk_trace_parser.so 01:16:58.238 CC lib/conf/conf.o 01:16:58.238 CC lib/rdma_utils/rdma_utils.o 01:16:58.238 CC lib/env_dpdk/env.o 01:16:58.238 CC lib/vmd/vmd.o 01:16:58.238 CC lib/idxd/idxd.o 01:16:58.238 CC lib/env_dpdk/pci.o 01:16:58.238 CC lib/env_dpdk/init.o 01:16:58.238 CC lib/env_dpdk/memory.o 01:16:58.238 CC lib/idxd/idxd_user.o 01:16:58.238 CC lib/json/json_parse.o 01:16:58.497 LIB libspdk_conf.a 01:16:58.497 CC lib/json/json_util.o 01:16:58.497 CC lib/env_dpdk/threads.o 
01:16:58.497 SO libspdk_conf.so.6.0 01:16:58.497 LIB libspdk_rdma_utils.a 01:16:58.497 SO libspdk_rdma_utils.so.1.0 01:16:58.497 SYMLINK libspdk_conf.so 01:16:58.497 CC lib/vmd/led.o 01:16:58.497 SYMLINK libspdk_rdma_utils.so 01:16:58.497 CC lib/env_dpdk/pci_ioat.o 01:16:58.755 CC lib/env_dpdk/pci_virtio.o 01:16:58.755 CC lib/json/json_write.o 01:16:58.755 CC lib/env_dpdk/pci_vmd.o 01:16:58.755 CC lib/idxd/idxd_kernel.o 01:16:58.755 CC lib/env_dpdk/pci_idxd.o 01:16:58.755 CC lib/rdma_provider/common.o 01:16:58.755 CC lib/env_dpdk/pci_event.o 01:16:58.755 CC lib/rdma_provider/rdma_provider_verbs.o 01:16:59.035 CC lib/env_dpdk/sigbus_handler.o 01:16:59.035 CC lib/env_dpdk/pci_dpdk.o 01:16:59.035 CC lib/env_dpdk/pci_dpdk_2207.o 01:16:59.035 CC lib/env_dpdk/pci_dpdk_2211.o 01:16:59.035 LIB libspdk_idxd.a 01:16:59.035 LIB libspdk_json.a 01:16:59.035 SO libspdk_json.so.6.0 01:16:59.035 LIB libspdk_rdma_provider.a 01:16:59.035 SO libspdk_idxd.so.12.1 01:16:59.035 LIB libspdk_vmd.a 01:16:59.035 SO libspdk_rdma_provider.so.7.0 01:16:59.035 SO libspdk_vmd.so.6.0 01:16:59.294 SYMLINK libspdk_json.so 01:16:59.294 SYMLINK libspdk_idxd.so 01:16:59.294 SYMLINK libspdk_rdma_provider.so 01:16:59.294 SYMLINK libspdk_vmd.so 01:16:59.294 CC lib/jsonrpc/jsonrpc_server.o 01:16:59.294 CC lib/jsonrpc/jsonrpc_server_tcp.o 01:16:59.294 CC lib/jsonrpc/jsonrpc_client.o 01:16:59.294 CC lib/jsonrpc/jsonrpc_client_tcp.o 01:16:59.552 LIB libspdk_jsonrpc.a 01:16:59.810 SO libspdk_jsonrpc.so.6.0 01:16:59.810 SYMLINK libspdk_jsonrpc.so 01:17:00.069 LIB libspdk_env_dpdk.a 01:17:00.069 CC lib/rpc/rpc.o 01:17:00.069 SO libspdk_env_dpdk.so.15.1 01:17:00.328 SYMLINK libspdk_env_dpdk.so 01:17:00.328 LIB libspdk_rpc.a 01:17:00.328 SO libspdk_rpc.so.6.0 01:17:00.328 SYMLINK libspdk_rpc.so 01:17:00.586 CC lib/notify/notify.o 01:17:00.586 CC lib/notify/notify_rpc.o 01:17:00.586 CC lib/keyring/keyring_rpc.o 01:17:00.586 CC lib/keyring/keyring.o 01:17:00.586 CC lib/trace/trace_flags.o 01:17:00.586 CC 
lib/trace/trace.o 01:17:00.586 CC lib/trace/trace_rpc.o 01:17:00.845 LIB libspdk_notify.a 01:17:00.845 SO libspdk_notify.so.6.0 01:17:00.845 SYMLINK libspdk_notify.so 01:17:00.845 LIB libspdk_keyring.a 01:17:01.104 LIB libspdk_trace.a 01:17:01.104 SO libspdk_keyring.so.2.0 01:17:01.104 SO libspdk_trace.so.11.0 01:17:01.104 SYMLINK libspdk_keyring.so 01:17:01.104 SYMLINK libspdk_trace.so 01:17:01.377 CC lib/sock/sock.o 01:17:01.377 CC lib/sock/sock_rpc.o 01:17:01.377 CC lib/thread/iobuf.o 01:17:01.377 CC lib/thread/thread.o 01:17:01.985 LIB libspdk_sock.a 01:17:01.985 SO libspdk_sock.so.10.0 01:17:01.985 SYMLINK libspdk_sock.so 01:17:02.243 CC lib/nvme/nvme_ctrlr_cmd.o 01:17:02.243 CC lib/nvme/nvme_ctrlr.o 01:17:02.243 CC lib/nvme/nvme_fabric.o 01:17:02.243 CC lib/nvme/nvme_ns_cmd.o 01:17:02.243 CC lib/nvme/nvme_ns.o 01:17:02.243 CC lib/nvme/nvme_pcie_common.o 01:17:02.243 CC lib/nvme/nvme_pcie.o 01:17:02.243 CC lib/nvme/nvme.o 01:17:02.243 CC lib/nvme/nvme_qpair.o 01:17:03.179 CC lib/nvme/nvme_quirks.o 01:17:03.179 CC lib/nvme/nvme_transport.o 01:17:03.179 CC lib/nvme/nvme_discovery.o 01:17:03.179 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 01:17:03.438 CC lib/nvme/nvme_ns_ocssd_cmd.o 01:17:03.438 LIB libspdk_thread.a 01:17:03.438 SO libspdk_thread.so.11.0 01:17:03.438 CC lib/nvme/nvme_tcp.o 01:17:03.438 CC lib/nvme/nvme_opal.o 01:17:03.438 SYMLINK libspdk_thread.so 01:17:03.438 CC lib/nvme/nvme_io_msg.o 01:17:03.705 CC lib/nvme/nvme_poll_group.o 01:17:03.705 CC lib/nvme/nvme_zns.o 01:17:03.962 CC lib/nvme/nvme_stubs.o 01:17:03.962 CC lib/nvme/nvme_auth.o 01:17:03.962 CC lib/nvme/nvme_cuse.o 01:17:03.962 CC lib/nvme/nvme_rdma.o 01:17:04.220 CC lib/accel/accel.o 01:17:04.220 CC lib/blob/blobstore.o 01:17:04.478 CC lib/blob/request.o 01:17:04.478 CC lib/init/json_config.o 01:17:04.478 CC lib/virtio/virtio.o 01:17:04.735 CC lib/virtio/virtio_vhost_user.o 01:17:04.735 CC lib/init/subsystem.o 01:17:04.735 CC lib/init/subsystem_rpc.o 01:17:04.993 CC lib/init/rpc.o 01:17:04.993 CC 
lib/accel/accel_rpc.o 01:17:04.993 CC lib/accel/accel_sw.o 01:17:04.993 CC lib/virtio/virtio_vfio_user.o 01:17:04.993 LIB libspdk_init.a 01:17:05.251 CC lib/fsdev/fsdev.o 01:17:05.251 SO libspdk_init.so.6.0 01:17:05.251 CC lib/fsdev/fsdev_io.o 01:17:05.251 SYMLINK libspdk_init.so 01:17:05.251 CC lib/fsdev/fsdev_rpc.o 01:17:05.251 CC lib/virtio/virtio_pci.o 01:17:05.251 CC lib/blob/zeroes.o 01:17:05.251 CC lib/blob/blob_bs_dev.o 01:17:05.508 CC lib/event/reactor.o 01:17:05.508 CC lib/event/app.o 01:17:05.508 CC lib/event/log_rpc.o 01:17:05.508 LIB libspdk_virtio.a 01:17:05.508 CC lib/event/app_rpc.o 01:17:05.509 CC lib/event/scheduler_static.o 01:17:05.819 LIB libspdk_accel.a 01:17:05.819 SO libspdk_virtio.so.7.0 01:17:05.819 SO libspdk_accel.so.16.0 01:17:05.819 LIB libspdk_nvme.a 01:17:05.819 SYMLINK libspdk_virtio.so 01:17:05.819 SYMLINK libspdk_accel.so 01:17:05.819 LIB libspdk_fsdev.a 01:17:05.819 SO libspdk_fsdev.so.2.0 01:17:06.075 SO libspdk_nvme.so.15.0 01:17:06.075 SYMLINK libspdk_fsdev.so 01:17:06.075 LIB libspdk_event.a 01:17:06.075 CC lib/bdev/bdev.o 01:17:06.075 CC lib/bdev/bdev_rpc.o 01:17:06.075 CC lib/bdev/bdev_zone.o 01:17:06.075 CC lib/bdev/part.o 01:17:06.075 CC lib/bdev/scsi_nvme.o 01:17:06.075 SO libspdk_event.so.14.0 01:17:06.075 SYMLINK libspdk_event.so 01:17:06.075 CC lib/fuse_dispatcher/fuse_dispatcher.o 01:17:06.331 SYMLINK libspdk_nvme.so 01:17:06.894 LIB libspdk_fuse_dispatcher.a 01:17:06.894 SO libspdk_fuse_dispatcher.so.1.0 01:17:06.894 SYMLINK libspdk_fuse_dispatcher.so 01:17:08.266 LIB libspdk_blob.a 01:17:08.266 SO libspdk_blob.so.12.0 01:17:08.524 SYMLINK libspdk_blob.so 01:17:08.781 CC lib/lvol/lvol.o 01:17:08.781 CC lib/blobfs/blobfs.o 01:17:08.781 CC lib/blobfs/tree.o 01:17:09.348 LIB libspdk_bdev.a 01:17:09.348 SO libspdk_bdev.so.17.0 01:17:09.606 SYMLINK libspdk_bdev.so 01:17:09.606 CC lib/nbd/nbd.o 01:17:09.606 CC lib/nbd/nbd_rpc.o 01:17:09.606 CC lib/ublk/ublk.o 01:17:09.606 CC lib/ublk/ublk_rpc.o 01:17:09.606 CC 
lib/ftl/ftl_core.o 01:17:09.606 CC lib/ftl/ftl_init.o 01:17:09.606 CC lib/scsi/dev.o 01:17:09.606 CC lib/nvmf/ctrlr.o 01:17:09.864 LIB libspdk_blobfs.a 01:17:09.864 SO libspdk_blobfs.so.11.0 01:17:09.864 LIB libspdk_lvol.a 01:17:09.864 CC lib/nvmf/ctrlr_discovery.o 01:17:09.864 SYMLINK libspdk_blobfs.so 01:17:09.864 CC lib/nvmf/ctrlr_bdev.o 01:17:09.864 SO libspdk_lvol.so.11.0 01:17:09.864 CC lib/nvmf/subsystem.o 01:17:09.865 CC lib/nvmf/nvmf.o 01:17:10.123 SYMLINK libspdk_lvol.so 01:17:10.123 CC lib/nvmf/nvmf_rpc.o 01:17:10.123 CC lib/scsi/lun.o 01:17:10.123 CC lib/ftl/ftl_layout.o 01:17:10.123 LIB libspdk_nbd.a 01:17:10.123 SO libspdk_nbd.so.7.0 01:17:10.382 SYMLINK libspdk_nbd.so 01:17:10.382 CC lib/scsi/port.o 01:17:10.382 CC lib/ftl/ftl_debug.o 01:17:10.382 CC lib/nvmf/transport.o 01:17:10.642 LIB libspdk_ublk.a 01:17:10.642 CC lib/scsi/scsi.o 01:17:10.642 SO libspdk_ublk.so.3.0 01:17:10.642 CC lib/scsi/scsi_bdev.o 01:17:10.642 SYMLINK libspdk_ublk.so 01:17:10.642 CC lib/ftl/ftl_io.o 01:17:10.642 CC lib/ftl/ftl_sb.o 01:17:10.642 CC lib/ftl/ftl_l2p.o 01:17:10.901 CC lib/nvmf/tcp.o 01:17:10.901 CC lib/nvmf/stubs.o 01:17:10.901 CC lib/ftl/ftl_l2p_flat.o 01:17:10.901 CC lib/scsi/scsi_pr.o 01:17:11.159 CC lib/nvmf/mdns_server.o 01:17:11.159 CC lib/nvmf/rdma.o 01:17:11.159 CC lib/nvmf/auth.o 01:17:11.159 CC lib/ftl/ftl_nv_cache.o 01:17:11.417 CC lib/scsi/scsi_rpc.o 01:17:11.417 CC lib/scsi/task.o 01:17:11.417 CC lib/ftl/ftl_band.o 01:17:11.417 CC lib/ftl/ftl_band_ops.o 01:17:11.417 CC lib/ftl/ftl_writer.o 01:17:11.675 CC lib/ftl/ftl_rq.o 01:17:11.675 LIB libspdk_scsi.a 01:17:11.675 SO libspdk_scsi.so.9.0 01:17:11.675 SYMLINK libspdk_scsi.so 01:17:11.675 CC lib/ftl/ftl_reloc.o 01:17:11.934 CC lib/ftl/ftl_l2p_cache.o 01:17:11.934 CC lib/ftl/ftl_p2l.o 01:17:11.934 CC lib/iscsi/conn.o 01:17:11.934 CC lib/vhost/vhost.o 01:17:11.934 CC lib/iscsi/init_grp.o 01:17:12.193 CC lib/vhost/vhost_rpc.o 01:17:12.193 CC lib/ftl/ftl_p2l_log.o 01:17:12.193 CC lib/ftl/mngt/ftl_mngt.o 
01:17:12.470 CC lib/ftl/mngt/ftl_mngt_bdev.o 01:17:12.470 CC lib/vhost/vhost_scsi.o 01:17:12.470 CC lib/vhost/vhost_blk.o 01:17:12.470 CC lib/ftl/mngt/ftl_mngt_shutdown.o 01:17:12.729 CC lib/vhost/rte_vhost_user.o 01:17:12.729 CC lib/iscsi/iscsi.o 01:17:12.729 CC lib/iscsi/param.o 01:17:12.729 CC lib/ftl/mngt/ftl_mngt_startup.o 01:17:12.729 CC lib/ftl/mngt/ftl_mngt_md.o 01:17:12.729 CC lib/iscsi/portal_grp.o 01:17:12.987 CC lib/iscsi/tgt_node.o 01:17:12.987 CC lib/ftl/mngt/ftl_mngt_misc.o 01:17:12.987 CC lib/iscsi/iscsi_subsystem.o 01:17:13.246 CC lib/iscsi/iscsi_rpc.o 01:17:13.246 CC lib/iscsi/task.o 01:17:13.246 CC lib/ftl/mngt/ftl_mngt_ioch.o 01:17:13.504 CC lib/ftl/mngt/ftl_mngt_l2p.o 01:17:13.504 CC lib/ftl/mngt/ftl_mngt_band.o 01:17:13.504 CC lib/ftl/mngt/ftl_mngt_self_test.o 01:17:13.504 LIB libspdk_nvmf.a 01:17:13.504 CC lib/ftl/mngt/ftl_mngt_p2l.o 01:17:13.761 CC lib/ftl/mngt/ftl_mngt_recovery.o 01:17:13.761 CC lib/ftl/mngt/ftl_mngt_upgrade.o 01:17:13.761 CC lib/ftl/utils/ftl_conf.o 01:17:13.761 CC lib/ftl/utils/ftl_md.o 01:17:13.761 SO libspdk_nvmf.so.20.0 01:17:13.761 CC lib/ftl/utils/ftl_mempool.o 01:17:13.761 LIB libspdk_vhost.a 01:17:13.761 CC lib/ftl/utils/ftl_bitmap.o 01:17:13.761 CC lib/ftl/utils/ftl_property.o 01:17:14.018 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 01:17:14.018 SO libspdk_vhost.so.8.0 01:17:14.018 CC lib/ftl/upgrade/ftl_layout_upgrade.o 01:17:14.018 CC lib/ftl/upgrade/ftl_sb_upgrade.o 01:17:14.018 SYMLINK libspdk_nvmf.so 01:17:14.018 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 01:17:14.018 SYMLINK libspdk_vhost.so 01:17:14.018 CC lib/ftl/upgrade/ftl_band_upgrade.o 01:17:14.018 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 01:17:14.018 CC lib/ftl/upgrade/ftl_trim_upgrade.o 01:17:14.277 CC lib/ftl/upgrade/ftl_sb_v3.o 01:17:14.277 CC lib/ftl/upgrade/ftl_sb_v5.o 01:17:14.277 CC lib/ftl/nvc/ftl_nvc_dev.o 01:17:14.277 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 01:17:14.277 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 01:17:14.277 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 
01:17:14.277 CC lib/ftl/base/ftl_base_dev.o 01:17:14.277 CC lib/ftl/base/ftl_base_bdev.o 01:17:14.277 CC lib/ftl/ftl_trace.o 01:17:14.535 LIB libspdk_iscsi.a 01:17:14.535 LIB libspdk_ftl.a 01:17:14.535 SO libspdk_iscsi.so.8.0 01:17:14.792 SYMLINK libspdk_iscsi.so 01:17:14.792 SO libspdk_ftl.so.9.0 01:17:15.050 SYMLINK libspdk_ftl.so 01:17:15.307 CC module/env_dpdk/env_dpdk_rpc.o 01:17:15.564 CC module/blob/bdev/blob_bdev.o 01:17:15.564 CC module/fsdev/aio/fsdev_aio.o 01:17:15.565 CC module/scheduler/dpdk_governor/dpdk_governor.o 01:17:15.565 CC module/accel/error/accel_error.o 01:17:15.565 CC module/scheduler/dynamic/scheduler_dynamic.o 01:17:15.565 CC module/sock/posix/posix.o 01:17:15.565 CC module/keyring/file/keyring.o 01:17:15.565 CC module/accel/ioat/accel_ioat.o 01:17:15.565 CC module/keyring/linux/keyring.o 01:17:15.565 LIB libspdk_env_dpdk_rpc.a 01:17:15.565 SO libspdk_env_dpdk_rpc.so.6.0 01:17:15.565 SYMLINK libspdk_env_dpdk_rpc.so 01:17:15.565 CC module/keyring/linux/keyring_rpc.o 01:17:15.565 CC module/keyring/file/keyring_rpc.o 01:17:15.565 LIB libspdk_scheduler_dpdk_governor.a 01:17:15.565 SO libspdk_scheduler_dpdk_governor.so.4.0 01:17:15.823 CC module/accel/ioat/accel_ioat_rpc.o 01:17:15.823 LIB libspdk_scheduler_dynamic.a 01:17:15.823 CC module/accel/error/accel_error_rpc.o 01:17:15.823 LIB libspdk_keyring_linux.a 01:17:15.823 SO libspdk_scheduler_dynamic.so.4.0 01:17:15.823 SYMLINK libspdk_scheduler_dpdk_governor.so 01:17:15.823 SO libspdk_keyring_linux.so.1.0 01:17:15.823 LIB libspdk_blob_bdev.a 01:17:15.823 SO libspdk_blob_bdev.so.12.0 01:17:15.823 SYMLINK libspdk_scheduler_dynamic.so 01:17:15.823 LIB libspdk_keyring_file.a 01:17:15.823 SYMLINK libspdk_keyring_linux.so 01:17:15.823 CC module/accel/dsa/accel_dsa.o 01:17:15.823 CC module/accel/dsa/accel_dsa_rpc.o 01:17:15.823 LIB libspdk_accel_ioat.a 01:17:15.823 SO libspdk_keyring_file.so.2.0 01:17:15.823 SYMLINK libspdk_blob_bdev.so 01:17:15.823 SO libspdk_accel_ioat.so.6.0 01:17:15.823 LIB 
libspdk_accel_error.a 01:17:15.823 SYMLINK libspdk_keyring_file.so 01:17:15.823 SO libspdk_accel_error.so.2.0 01:17:15.823 CC module/scheduler/gscheduler/gscheduler.o 01:17:15.823 SYMLINK libspdk_accel_ioat.so 01:17:16.081 SYMLINK libspdk_accel_error.so 01:17:16.081 CC module/fsdev/aio/fsdev_aio_rpc.o 01:17:16.081 CC module/accel/iaa/accel_iaa.o 01:17:16.081 CC module/fsdev/aio/linux_aio_mgr.o 01:17:16.081 LIB libspdk_scheduler_gscheduler.a 01:17:16.081 SO libspdk_scheduler_gscheduler.so.4.0 01:17:16.081 CC module/bdev/delay/vbdev_delay.o 01:17:16.081 CC module/bdev/error/vbdev_error.o 01:17:16.081 CC module/bdev/delay/vbdev_delay_rpc.o 01:17:16.081 CC module/blobfs/bdev/blobfs_bdev.o 01:17:16.081 SYMLINK libspdk_scheduler_gscheduler.so 01:17:16.081 CC module/blobfs/bdev/blobfs_bdev_rpc.o 01:17:16.081 LIB libspdk_accel_dsa.a 01:17:16.081 CC module/accel/iaa/accel_iaa_rpc.o 01:17:16.339 SO libspdk_accel_dsa.so.5.0 01:17:16.339 CC module/bdev/error/vbdev_error_rpc.o 01:17:16.339 SYMLINK libspdk_accel_dsa.so 01:17:16.339 LIB libspdk_fsdev_aio.a 01:17:16.339 LIB libspdk_accel_iaa.a 01:17:16.339 SO libspdk_fsdev_aio.so.1.0 01:17:16.339 SO libspdk_accel_iaa.so.3.0 01:17:16.339 LIB libspdk_sock_posix.a 01:17:16.339 LIB libspdk_blobfs_bdev.a 01:17:16.339 LIB libspdk_bdev_error.a 01:17:16.339 SO libspdk_blobfs_bdev.so.6.0 01:17:16.339 SO libspdk_sock_posix.so.6.0 01:17:16.339 SYMLINK libspdk_accel_iaa.so 01:17:16.339 SYMLINK libspdk_fsdev_aio.so 01:17:16.596 CC module/bdev/gpt/gpt.o 01:17:16.596 SO libspdk_bdev_error.so.6.0 01:17:16.596 SYMLINK libspdk_blobfs_bdev.so 01:17:16.596 CC module/bdev/malloc/bdev_malloc.o 01:17:16.596 CC module/bdev/lvol/vbdev_lvol.o 01:17:16.596 CC module/bdev/malloc/bdev_malloc_rpc.o 01:17:16.596 SYMLINK libspdk_bdev_error.so 01:17:16.596 LIB libspdk_bdev_delay.a 01:17:16.596 SYMLINK libspdk_sock_posix.so 01:17:16.596 SO libspdk_bdev_delay.so.6.0 01:17:16.596 CC module/bdev/null/bdev_null.o 01:17:16.596 CC module/bdev/passthru/vbdev_passthru.o 
01:17:16.596 CC module/bdev/nvme/bdev_nvme.o 01:17:16.596 CC module/bdev/gpt/vbdev_gpt.o 01:17:16.596 SYMLINK libspdk_bdev_delay.so 01:17:16.596 CC module/bdev/lvol/vbdev_lvol_rpc.o 01:17:16.853 CC module/bdev/split/vbdev_split.o 01:17:16.853 CC module/bdev/raid/bdev_raid.o 01:17:16.853 CC module/bdev/null/bdev_null_rpc.o 01:17:16.853 CC module/bdev/passthru/vbdev_passthru_rpc.o 01:17:16.853 LIB libspdk_bdev_null.a 01:17:17.111 SO libspdk_bdev_null.so.6.0 01:17:17.111 CC module/bdev/split/vbdev_split_rpc.o 01:17:17.111 CC module/bdev/nvme/bdev_nvme_rpc.o 01:17:17.111 LIB libspdk_bdev_gpt.a 01:17:17.111 LIB libspdk_bdev_malloc.a 01:17:17.111 SO libspdk_bdev_gpt.so.6.0 01:17:17.111 SO libspdk_bdev_malloc.so.6.0 01:17:17.111 SYMLINK libspdk_bdev_null.so 01:17:17.111 CC module/bdev/raid/bdev_raid_rpc.o 01:17:17.111 SYMLINK libspdk_bdev_gpt.so 01:17:17.111 SYMLINK libspdk_bdev_malloc.so 01:17:17.111 LIB libspdk_bdev_passthru.a 01:17:17.111 SO libspdk_bdev_passthru.so.6.0 01:17:17.111 LIB libspdk_bdev_split.a 01:17:17.111 CC module/bdev/raid/bdev_raid_sb.o 01:17:17.111 LIB libspdk_bdev_lvol.a 01:17:17.111 SYMLINK libspdk_bdev_passthru.so 01:17:17.369 SO libspdk_bdev_split.so.6.0 01:17:17.369 SO libspdk_bdev_lvol.so.6.0 01:17:17.369 CC module/bdev/aio/bdev_aio.o 01:17:17.369 CC module/bdev/zone_block/vbdev_zone_block.o 01:17:17.369 SYMLINK libspdk_bdev_split.so 01:17:17.369 SYMLINK libspdk_bdev_lvol.so 01:17:17.369 CC module/bdev/nvme/nvme_rpc.o 01:17:17.369 CC module/bdev/nvme/bdev_mdns_client.o 01:17:17.369 CC module/bdev/ftl/bdev_ftl.o 01:17:17.369 CC module/bdev/iscsi/bdev_iscsi.o 01:17:17.627 CC module/bdev/nvme/vbdev_opal.o 01:17:17.627 CC module/bdev/nvme/vbdev_opal_rpc.o 01:17:17.627 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 01:17:17.627 CC module/bdev/aio/bdev_aio_rpc.o 01:17:17.627 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 01:17:17.627 CC module/bdev/ftl/bdev_ftl_rpc.o 01:17:17.884 CC module/bdev/iscsi/bdev_iscsi_rpc.o 01:17:17.884 CC 
module/bdev/raid/raid0.o 01:17:17.884 CC module/bdev/raid/raid1.o 01:17:17.884 LIB libspdk_bdev_zone_block.a 01:17:17.884 CC module/bdev/raid/concat.o 01:17:17.884 LIB libspdk_bdev_aio.a 01:17:17.884 SO libspdk_bdev_zone_block.so.6.0 01:17:17.884 SO libspdk_bdev_aio.so.6.0 01:17:17.884 LIB libspdk_bdev_iscsi.a 01:17:17.884 LIB libspdk_bdev_ftl.a 01:17:17.884 CC module/bdev/raid/raid5f.o 01:17:18.142 SYMLINK libspdk_bdev_aio.so 01:17:18.142 SO libspdk_bdev_ftl.so.6.0 01:17:18.142 SO libspdk_bdev_iscsi.so.6.0 01:17:18.142 SYMLINK libspdk_bdev_zone_block.so 01:17:18.142 CC module/bdev/virtio/bdev_virtio_scsi.o 01:17:18.142 CC module/bdev/virtio/bdev_virtio_blk.o 01:17:18.142 CC module/bdev/virtio/bdev_virtio_rpc.o 01:17:18.142 SYMLINK libspdk_bdev_ftl.so 01:17:18.142 SYMLINK libspdk_bdev_iscsi.so 01:17:18.706 LIB libspdk_bdev_raid.a 01:17:18.706 SO libspdk_bdev_raid.so.6.0 01:17:18.706 LIB libspdk_bdev_virtio.a 01:17:18.706 SO libspdk_bdev_virtio.so.6.0 01:17:18.706 SYMLINK libspdk_bdev_raid.so 01:17:18.706 SYMLINK libspdk_bdev_virtio.so 01:17:19.645 LIB libspdk_bdev_nvme.a 01:17:19.645 SO libspdk_bdev_nvme.so.7.1 01:17:19.903 SYMLINK libspdk_bdev_nvme.so 01:17:20.476 CC module/event/subsystems/iobuf/iobuf.o 01:17:20.476 CC module/event/subsystems/iobuf/iobuf_rpc.o 01:17:20.476 CC module/event/subsystems/fsdev/fsdev.o 01:17:20.476 CC module/event/subsystems/vmd/vmd.o 01:17:20.476 CC module/event/subsystems/sock/sock.o 01:17:20.476 CC module/event/subsystems/vmd/vmd_rpc.o 01:17:20.476 CC module/event/subsystems/keyring/keyring.o 01:17:20.476 CC module/event/subsystems/vhost_blk/vhost_blk.o 01:17:20.476 CC module/event/subsystems/scheduler/scheduler.o 01:17:20.476 LIB libspdk_event_vhost_blk.a 01:17:20.476 LIB libspdk_event_fsdev.a 01:17:20.476 LIB libspdk_event_keyring.a 01:17:20.476 SO libspdk_event_vhost_blk.so.3.0 01:17:20.476 LIB libspdk_event_vmd.a 01:17:20.476 LIB libspdk_event_sock.a 01:17:20.786 SO libspdk_event_fsdev.so.1.0 01:17:20.786 LIB 
libspdk_event_scheduler.a 01:17:20.786 LIB libspdk_event_iobuf.a 01:17:20.786 SO libspdk_event_keyring.so.1.0 01:17:20.786 SO libspdk_event_sock.so.5.0 01:17:20.786 SO libspdk_event_vmd.so.6.0 01:17:20.786 SO libspdk_event_scheduler.so.4.0 01:17:20.786 SYMLINK libspdk_event_vhost_blk.so 01:17:20.786 SO libspdk_event_iobuf.so.3.0 01:17:20.786 SYMLINK libspdk_event_fsdev.so 01:17:20.786 SYMLINK libspdk_event_keyring.so 01:17:20.786 SYMLINK libspdk_event_sock.so 01:17:20.786 SYMLINK libspdk_event_scheduler.so 01:17:20.786 SYMLINK libspdk_event_vmd.so 01:17:20.786 SYMLINK libspdk_event_iobuf.so 01:17:21.044 CC module/event/subsystems/accel/accel.o 01:17:21.044 LIB libspdk_event_accel.a 01:17:21.302 SO libspdk_event_accel.so.6.0 01:17:21.302 SYMLINK libspdk_event_accel.so 01:17:21.560 CC module/event/subsystems/bdev/bdev.o 01:17:21.818 LIB libspdk_event_bdev.a 01:17:21.818 SO libspdk_event_bdev.so.6.0 01:17:21.818 SYMLINK libspdk_event_bdev.so 01:17:22.076 CC module/event/subsystems/nvmf/nvmf_rpc.o 01:17:22.076 CC module/event/subsystems/nvmf/nvmf_tgt.o 01:17:22.076 CC module/event/subsystems/ublk/ublk.o 01:17:22.076 CC module/event/subsystems/nbd/nbd.o 01:17:22.076 CC module/event/subsystems/scsi/scsi.o 01:17:22.336 LIB libspdk_event_nbd.a 01:17:22.336 LIB libspdk_event_ublk.a 01:17:22.336 SO libspdk_event_nbd.so.6.0 01:17:22.336 SO libspdk_event_ublk.so.3.0 01:17:22.336 LIB libspdk_event_scsi.a 01:17:22.336 SO libspdk_event_scsi.so.6.0 01:17:22.336 SYMLINK libspdk_event_nbd.so 01:17:22.336 SYMLINK libspdk_event_ublk.so 01:17:22.336 LIB libspdk_event_nvmf.a 01:17:22.336 SYMLINK libspdk_event_scsi.so 01:17:22.336 SO libspdk_event_nvmf.so.6.0 01:17:22.594 SYMLINK libspdk_event_nvmf.so 01:17:22.594 CC module/event/subsystems/iscsi/iscsi.o 01:17:22.594 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 01:17:22.852 LIB libspdk_event_vhost_scsi.a 01:17:22.852 LIB libspdk_event_iscsi.a 01:17:22.852 SO libspdk_event_vhost_scsi.so.3.0 01:17:22.852 SO libspdk_event_iscsi.so.6.0 
01:17:22.852 SYMLINK libspdk_event_vhost_scsi.so 01:17:22.852 SYMLINK libspdk_event_iscsi.so 01:17:23.110 SO libspdk.so.6.0 01:17:23.110 SYMLINK libspdk.so 01:17:23.368 TEST_HEADER include/spdk/accel.h 01:17:23.368 TEST_HEADER include/spdk/accel_module.h 01:17:23.368 TEST_HEADER include/spdk/assert.h 01:17:23.368 TEST_HEADER include/spdk/barrier.h 01:17:23.368 TEST_HEADER include/spdk/base64.h 01:17:23.368 TEST_HEADER include/spdk/bdev.h 01:17:23.368 TEST_HEADER include/spdk/bdev_module.h 01:17:23.368 CC test/rpc_client/rpc_client_test.o 01:17:23.368 TEST_HEADER include/spdk/bdev_zone.h 01:17:23.368 TEST_HEADER include/spdk/bit_array.h 01:17:23.368 TEST_HEADER include/spdk/bit_pool.h 01:17:23.368 TEST_HEADER include/spdk/blob_bdev.h 01:17:23.368 CXX app/trace/trace.o 01:17:23.368 TEST_HEADER include/spdk/blobfs_bdev.h 01:17:23.368 TEST_HEADER include/spdk/blobfs.h 01:17:23.368 TEST_HEADER include/spdk/blob.h 01:17:23.368 TEST_HEADER include/spdk/conf.h 01:17:23.368 TEST_HEADER include/spdk/config.h 01:17:23.368 CC examples/interrupt_tgt/interrupt_tgt.o 01:17:23.368 TEST_HEADER include/spdk/cpuset.h 01:17:23.368 TEST_HEADER include/spdk/crc16.h 01:17:23.368 TEST_HEADER include/spdk/crc32.h 01:17:23.368 TEST_HEADER include/spdk/crc64.h 01:17:23.368 TEST_HEADER include/spdk/dif.h 01:17:23.368 TEST_HEADER include/spdk/dma.h 01:17:23.368 TEST_HEADER include/spdk/endian.h 01:17:23.368 TEST_HEADER include/spdk/env_dpdk.h 01:17:23.368 TEST_HEADER include/spdk/env.h 01:17:23.368 TEST_HEADER include/spdk/event.h 01:17:23.368 TEST_HEADER include/spdk/fd_group.h 01:17:23.368 TEST_HEADER include/spdk/fd.h 01:17:23.368 TEST_HEADER include/spdk/file.h 01:17:23.368 TEST_HEADER include/spdk/fsdev.h 01:17:23.368 TEST_HEADER include/spdk/fsdev_module.h 01:17:23.368 TEST_HEADER include/spdk/ftl.h 01:17:23.368 TEST_HEADER include/spdk/fuse_dispatcher.h 01:17:23.368 TEST_HEADER include/spdk/gpt_spec.h 01:17:23.368 CC test/thread/poller_perf/poller_perf.o 01:17:23.368 TEST_HEADER 
include/spdk/hexlify.h 01:17:23.368 TEST_HEADER include/spdk/histogram_data.h 01:17:23.368 TEST_HEADER include/spdk/idxd.h 01:17:23.368 TEST_HEADER include/spdk/idxd_spec.h 01:17:23.368 CC examples/util/zipf/zipf.o 01:17:23.368 CC examples/ioat/perf/perf.o 01:17:23.368 TEST_HEADER include/spdk/init.h 01:17:23.368 TEST_HEADER include/spdk/ioat.h 01:17:23.368 TEST_HEADER include/spdk/ioat_spec.h 01:17:23.368 TEST_HEADER include/spdk/iscsi_spec.h 01:17:23.368 TEST_HEADER include/spdk/json.h 01:17:23.368 TEST_HEADER include/spdk/jsonrpc.h 01:17:23.368 TEST_HEADER include/spdk/keyring.h 01:17:23.368 TEST_HEADER include/spdk/keyring_module.h 01:17:23.368 TEST_HEADER include/spdk/likely.h 01:17:23.368 TEST_HEADER include/spdk/log.h 01:17:23.368 TEST_HEADER include/spdk/lvol.h 01:17:23.368 TEST_HEADER include/spdk/md5.h 01:17:23.368 TEST_HEADER include/spdk/memory.h 01:17:23.368 TEST_HEADER include/spdk/mmio.h 01:17:23.368 TEST_HEADER include/spdk/nbd.h 01:17:23.368 TEST_HEADER include/spdk/net.h 01:17:23.368 TEST_HEADER include/spdk/notify.h 01:17:23.368 TEST_HEADER include/spdk/nvme.h 01:17:23.368 TEST_HEADER include/spdk/nvme_intel.h 01:17:23.368 TEST_HEADER include/spdk/nvme_ocssd.h 01:17:23.368 TEST_HEADER include/spdk/nvme_ocssd_spec.h 01:17:23.368 TEST_HEADER include/spdk/nvme_spec.h 01:17:23.368 CC test/dma/test_dma/test_dma.o 01:17:23.368 TEST_HEADER include/spdk/nvme_zns.h 01:17:23.368 TEST_HEADER include/spdk/nvmf_cmd.h 01:17:23.368 TEST_HEADER include/spdk/nvmf_fc_spec.h 01:17:23.368 TEST_HEADER include/spdk/nvmf.h 01:17:23.368 TEST_HEADER include/spdk/nvmf_spec.h 01:17:23.368 TEST_HEADER include/spdk/nvmf_transport.h 01:17:23.368 CC test/app/bdev_svc/bdev_svc.o 01:17:23.368 TEST_HEADER include/spdk/opal.h 01:17:23.368 TEST_HEADER include/spdk/opal_spec.h 01:17:23.368 TEST_HEADER include/spdk/pci_ids.h 01:17:23.627 TEST_HEADER include/spdk/pipe.h 01:17:23.627 TEST_HEADER include/spdk/queue.h 01:17:23.627 TEST_HEADER include/spdk/reduce.h 01:17:23.627 
TEST_HEADER include/spdk/rpc.h 01:17:23.627 TEST_HEADER include/spdk/scheduler.h 01:17:23.627 TEST_HEADER include/spdk/scsi.h 01:17:23.627 TEST_HEADER include/spdk/scsi_spec.h 01:17:23.627 TEST_HEADER include/spdk/sock.h 01:17:23.627 TEST_HEADER include/spdk/stdinc.h 01:17:23.627 TEST_HEADER include/spdk/string.h 01:17:23.627 TEST_HEADER include/spdk/thread.h 01:17:23.627 TEST_HEADER include/spdk/trace.h 01:17:23.627 CC test/env/mem_callbacks/mem_callbacks.o 01:17:23.627 TEST_HEADER include/spdk/trace_parser.h 01:17:23.627 TEST_HEADER include/spdk/tree.h 01:17:23.627 TEST_HEADER include/spdk/ublk.h 01:17:23.627 TEST_HEADER include/spdk/util.h 01:17:23.627 TEST_HEADER include/spdk/uuid.h 01:17:23.627 TEST_HEADER include/spdk/version.h 01:17:23.627 TEST_HEADER include/spdk/vfio_user_pci.h 01:17:23.627 TEST_HEADER include/spdk/vfio_user_spec.h 01:17:23.627 TEST_HEADER include/spdk/vhost.h 01:17:23.627 TEST_HEADER include/spdk/vmd.h 01:17:23.627 TEST_HEADER include/spdk/xor.h 01:17:23.627 TEST_HEADER include/spdk/zipf.h 01:17:23.627 CXX test/cpp_headers/accel.o 01:17:23.627 LINK rpc_client_test 01:17:23.627 LINK poller_perf 01:17:23.627 LINK interrupt_tgt 01:17:23.627 LINK zipf 01:17:23.627 LINK ioat_perf 01:17:23.627 LINK bdev_svc 01:17:23.627 CXX test/cpp_headers/accel_module.o 01:17:23.884 CXX test/cpp_headers/assert.o 01:17:23.884 LINK spdk_trace 01:17:23.884 CC examples/ioat/verify/verify.o 01:17:23.884 CC test/event/event_perf/event_perf.o 01:17:23.884 CXX test/cpp_headers/barrier.o 01:17:23.884 CC test/event/reactor/reactor.o 01:17:24.141 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 01:17:24.141 CC test/app/histogram_perf/histogram_perf.o 01:17:24.141 CC examples/thread/thread/thread_ex.o 01:17:24.141 LINK test_dma 01:17:24.141 CC app/trace_record/trace_record.o 01:17:24.141 LINK verify 01:17:24.141 LINK event_perf 01:17:24.141 CXX test/cpp_headers/base64.o 01:17:24.141 LINK reactor 01:17:24.141 LINK mem_callbacks 01:17:24.141 LINK histogram_perf 01:17:24.141 CXX 
test/cpp_headers/bdev.o 01:17:24.398 CXX test/cpp_headers/bdev_module.o 01:17:24.398 LINK thread 01:17:24.398 CC test/event/reactor_perf/reactor_perf.o 01:17:24.398 CC test/env/vtophys/vtophys.o 01:17:24.398 LINK spdk_trace_record 01:17:24.398 CXX test/cpp_headers/bdev_zone.o 01:17:24.398 CC examples/sock/hello_world/hello_sock.o 01:17:24.655 CC examples/idxd/perf/perf.o 01:17:24.655 LINK nvme_fuzz 01:17:24.655 CC examples/vmd/lsvmd/lsvmd.o 01:17:24.655 LINK reactor_perf 01:17:24.655 CC test/event/app_repeat/app_repeat.o 01:17:24.655 LINK vtophys 01:17:24.655 CC test/event/scheduler/scheduler.o 01:17:24.655 LINK lsvmd 01:17:24.655 CC app/nvmf_tgt/nvmf_main.o 01:17:24.655 CXX test/cpp_headers/bit_array.o 01:17:24.655 LINK app_repeat 01:17:24.911 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 01:17:24.911 LINK hello_sock 01:17:24.911 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 01:17:24.911 CC app/iscsi_tgt/iscsi_tgt.o 01:17:24.911 CXX test/cpp_headers/bit_pool.o 01:17:24.911 LINK nvmf_tgt 01:17:24.911 CC examples/vmd/led/led.o 01:17:24.911 LINK idxd_perf 01:17:24.911 LINK scheduler 01:17:24.911 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 01:17:24.911 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 01:17:24.911 LINK env_dpdk_post_init 01:17:25.167 CXX test/cpp_headers/blob_bdev.o 01:17:25.167 LINK iscsi_tgt 01:17:25.167 CXX test/cpp_headers/blobfs_bdev.o 01:17:25.167 LINK led 01:17:25.167 CXX test/cpp_headers/blobfs.o 01:17:25.167 CXX test/cpp_headers/blob.o 01:17:25.424 CC test/env/memory/memory_ut.o 01:17:25.424 CC examples/accel/perf/accel_perf.o 01:17:25.424 CC test/env/pci/pci_ut.o 01:17:25.424 CXX test/cpp_headers/conf.o 01:17:25.424 CC examples/blob/hello_world/hello_blob.o 01:17:25.424 CC app/spdk_tgt/spdk_tgt.o 01:17:25.424 CXX test/cpp_headers/config.o 01:17:25.424 CC examples/nvme/hello_world/hello_world.o 01:17:25.424 LINK vhost_fuzz 01:17:25.679 CXX test/cpp_headers/cpuset.o 01:17:25.679 CXX test/cpp_headers/crc16.o 01:17:25.679 LINK spdk_tgt 01:17:25.679 CC 
test/accel/dif/dif.o 01:17:25.679 LINK hello_world 01:17:25.679 LINK hello_blob 01:17:25.934 CC examples/blob/cli/blobcli.o 01:17:25.934 LINK pci_ut 01:17:25.934 CXX test/cpp_headers/crc32.o 01:17:25.934 CC app/spdk_lspci/spdk_lspci.o 01:17:25.934 LINK accel_perf 01:17:25.934 CC examples/nvme/reconnect/reconnect.o 01:17:25.934 CC app/spdk_nvme_perf/perf.o 01:17:25.934 CXX test/cpp_headers/crc64.o 01:17:26.191 CXX test/cpp_headers/dif.o 01:17:26.191 LINK spdk_lspci 01:17:26.191 CXX test/cpp_headers/dma.o 01:17:26.448 CC test/blobfs/mkfs/mkfs.o 01:17:26.448 CC examples/fsdev/hello_world/hello_fsdev.o 01:17:26.448 LINK blobcli 01:17:26.448 LINK reconnect 01:17:26.448 CXX test/cpp_headers/endian.o 01:17:26.448 CC examples/bdev/hello_world/hello_bdev.o 01:17:26.706 LINK mkfs 01:17:26.706 LINK dif 01:17:26.706 CXX test/cpp_headers/env_dpdk.o 01:17:26.706 CC examples/nvme/nvme_manage/nvme_manage.o 01:17:26.706 LINK memory_ut 01:17:26.706 CC test/app/jsoncat/jsoncat.o 01:17:26.706 LINK hello_bdev 01:17:26.706 LINK hello_fsdev 01:17:26.706 CXX test/cpp_headers/env.o 01:17:26.964 CC examples/nvme/arbitration/arbitration.o 01:17:26.964 CC examples/nvme/hotplug/hotplug.o 01:17:26.964 LINK jsoncat 01:17:26.964 CXX test/cpp_headers/event.o 01:17:26.964 CC examples/nvme/cmb_copy/cmb_copy.o 01:17:26.964 CC examples/nvme/abort/abort.o 01:17:26.964 LINK spdk_nvme_perf 01:17:27.221 CC examples/bdev/bdevperf/bdevperf.o 01:17:27.221 LINK iscsi_fuzz 01:17:27.221 CC examples/nvme/pmr_persistence/pmr_persistence.o 01:17:27.221 LINK hotplug 01:17:27.221 CXX test/cpp_headers/fd_group.o 01:17:27.221 LINK cmb_copy 01:17:27.221 LINK arbitration 01:17:27.479 CC app/spdk_nvme_identify/identify.o 01:17:27.479 LINK nvme_manage 01:17:27.479 LINK pmr_persistence 01:17:27.479 CXX test/cpp_headers/fd.o 01:17:27.479 CC test/app/stub/stub.o 01:17:27.479 CC app/spdk_nvme_discover/discovery_aer.o 01:17:27.479 CC app/spdk_top/spdk_top.o 01:17:27.479 LINK abort 01:17:27.479 CXX test/cpp_headers/file.o 
01:17:27.737 LINK stub 01:17:27.737 CC test/nvme/aer/aer.o 01:17:27.737 CC test/lvol/esnap/esnap.o 01:17:27.737 CXX test/cpp_headers/fsdev.o 01:17:27.737 LINK spdk_nvme_discover 01:17:27.737 CC test/bdev/bdevio/bdevio.o 01:17:27.996 CC test/nvme/reset/reset.o 01:17:27.996 CC test/nvme/sgl/sgl.o 01:17:27.996 CXX test/cpp_headers/fsdev_module.o 01:17:27.996 CC app/vhost/vhost.o 01:17:27.996 LINK aer 01:17:27.996 CXX test/cpp_headers/ftl.o 01:17:28.254 LINK bdevperf 01:17:28.254 LINK reset 01:17:28.254 LINK vhost 01:17:28.254 LINK sgl 01:17:28.254 LINK bdevio 01:17:28.254 CC test/nvme/e2edp/nvme_dp.o 01:17:28.254 CXX test/cpp_headers/fuse_dispatcher.o 01:17:28.512 LINK spdk_nvme_identify 01:17:28.512 CC test/nvme/overhead/overhead.o 01:17:28.512 CC test/nvme/err_injection/err_injection.o 01:17:28.512 CXX test/cpp_headers/gpt_spec.o 01:17:28.512 CC test/nvme/startup/startup.o 01:17:28.512 CC test/nvme/reserve/reserve.o 01:17:28.512 CC examples/nvmf/nvmf/nvmf.o 01:17:28.512 CXX test/cpp_headers/hexlify.o 01:17:28.512 LINK spdk_top 01:17:28.512 LINK nvme_dp 01:17:28.770 LINK err_injection 01:17:28.770 LINK startup 01:17:28.770 CXX test/cpp_headers/histogram_data.o 01:17:28.770 LINK reserve 01:17:28.770 CXX test/cpp_headers/idxd.o 01:17:28.770 LINK overhead 01:17:28.770 CC test/nvme/simple_copy/simple_copy.o 01:17:29.029 LINK nvmf 01:17:29.029 CC app/spdk_dd/spdk_dd.o 01:17:29.029 CXX test/cpp_headers/idxd_spec.o 01:17:29.029 CC test/nvme/boot_partition/boot_partition.o 01:17:29.029 CC test/nvme/connect_stress/connect_stress.o 01:17:29.029 CXX test/cpp_headers/init.o 01:17:29.029 CC test/nvme/fused_ordering/fused_ordering.o 01:17:29.029 CC test/nvme/compliance/nvme_compliance.o 01:17:29.029 CXX test/cpp_headers/ioat.o 01:17:29.029 LINK simple_copy 01:17:29.309 LINK boot_partition 01:17:29.309 CXX test/cpp_headers/ioat_spec.o 01:17:29.309 LINK connect_stress 01:17:29.309 LINK fused_ordering 01:17:29.309 CXX test/cpp_headers/iscsi_spec.o 01:17:29.309 CC 
app/fio/nvme/fio_plugin.o 01:17:29.309 LINK spdk_dd 01:17:29.309 CC app/fio/bdev/fio_plugin.o 01:17:29.574 CXX test/cpp_headers/json.o 01:17:29.574 CC test/nvme/doorbell_aers/doorbell_aers.o 01:17:29.574 LINK nvme_compliance 01:17:29.574 CC test/nvme/fdp/fdp.o 01:17:29.574 CC test/nvme/cuse/cuse.o 01:17:29.574 CXX test/cpp_headers/jsonrpc.o 01:17:29.574 CXX test/cpp_headers/keyring.o 01:17:29.574 CXX test/cpp_headers/keyring_module.o 01:17:29.574 CXX test/cpp_headers/likely.o 01:17:29.574 LINK doorbell_aers 01:17:29.574 CXX test/cpp_headers/log.o 01:17:29.833 CXX test/cpp_headers/lvol.o 01:17:29.833 CXX test/cpp_headers/md5.o 01:17:29.833 CXX test/cpp_headers/memory.o 01:17:29.833 CXX test/cpp_headers/mmio.o 01:17:29.833 CXX test/cpp_headers/nbd.o 01:17:29.833 LINK fdp 01:17:29.833 CXX test/cpp_headers/net.o 01:17:30.092 CXX test/cpp_headers/notify.o 01:17:30.092 CXX test/cpp_headers/nvme.o 01:17:30.092 CXX test/cpp_headers/nvme_intel.o 01:17:30.092 CXX test/cpp_headers/nvme_ocssd.o 01:17:30.092 LINK spdk_nvme 01:17:30.092 CXX test/cpp_headers/nvme_ocssd_spec.o 01:17:30.092 LINK spdk_bdev 01:17:30.092 CXX test/cpp_headers/nvme_spec.o 01:17:30.092 CXX test/cpp_headers/nvme_zns.o 01:17:30.092 CXX test/cpp_headers/nvmf_cmd.o 01:17:30.092 CXX test/cpp_headers/nvmf_fc_spec.o 01:17:30.092 CXX test/cpp_headers/nvmf.o 01:17:30.092 CXX test/cpp_headers/nvmf_spec.o 01:17:30.350 CXX test/cpp_headers/nvmf_transport.o 01:17:30.350 CXX test/cpp_headers/opal.o 01:17:30.350 CXX test/cpp_headers/opal_spec.o 01:17:30.350 CXX test/cpp_headers/pci_ids.o 01:17:30.350 CXX test/cpp_headers/pipe.o 01:17:30.350 CXX test/cpp_headers/queue.o 01:17:30.350 CXX test/cpp_headers/reduce.o 01:17:30.350 CXX test/cpp_headers/rpc.o 01:17:30.350 CXX test/cpp_headers/scheduler.o 01:17:30.350 CXX test/cpp_headers/scsi.o 01:17:30.350 CXX test/cpp_headers/scsi_spec.o 01:17:30.350 CXX test/cpp_headers/sock.o 01:17:30.609 CXX test/cpp_headers/stdinc.o 01:17:30.609 CXX test/cpp_headers/string.o 01:17:30.609 
CXX test/cpp_headers/thread.o 01:17:30.609 CXX test/cpp_headers/trace.o 01:17:30.609 CXX test/cpp_headers/trace_parser.o 01:17:30.609 CXX test/cpp_headers/tree.o 01:17:30.609 CXX test/cpp_headers/ublk.o 01:17:30.609 CXX test/cpp_headers/util.o 01:17:30.609 CXX test/cpp_headers/uuid.o 01:17:30.609 CXX test/cpp_headers/version.o 01:17:30.609 CXX test/cpp_headers/vfio_user_pci.o 01:17:30.868 CXX test/cpp_headers/vfio_user_spec.o 01:17:30.868 CXX test/cpp_headers/vhost.o 01:17:30.868 CXX test/cpp_headers/vmd.o 01:17:30.868 CXX test/cpp_headers/xor.o 01:17:30.868 CXX test/cpp_headers/zipf.o 01:17:31.127 LINK cuse 01:17:34.405 LINK esnap 01:17:34.405 01:17:34.405 real 1m31.999s 01:17:34.405 user 8m33.166s 01:17:34.405 sys 1m42.180s 01:17:34.405 05:12:25 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 01:17:34.405 05:12:25 make -- common/autotest_common.sh@10 -- $ set +x 01:17:34.405 ************************************ 01:17:34.405 END TEST make 01:17:34.405 ************************************ 01:17:34.405 05:12:25 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 01:17:34.405 05:12:25 -- pm/common@29 -- $ signal_monitor_resources TERM 01:17:34.405 05:12:25 -- pm/common@40 -- $ local monitor pid pids signal=TERM 01:17:34.405 05:12:25 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 01:17:34.405 05:12:25 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 01:17:34.405 05:12:25 -- pm/common@44 -- $ pid=5308 01:17:34.405 05:12:25 -- pm/common@50 -- $ kill -TERM 5308 01:17:34.405 05:12:25 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 01:17:34.405 05:12:25 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 01:17:34.405 05:12:25 -- pm/common@44 -- $ pid=5309 01:17:34.405 05:12:25 -- pm/common@50 -- $ kill -TERM 5309 01:17:34.405 05:12:25 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 01:17:34.405 05:12:25 -- 
spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 01:17:34.663 05:12:26 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:17:34.663 05:12:26 -- common/autotest_common.sh@1693 -- # lcov --version 01:17:34.663 05:12:26 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:17:34.663 05:12:26 -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:17:34.663 05:12:26 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:17:34.663 05:12:26 -- scripts/common.sh@333 -- # local ver1 ver1_l 01:17:34.663 05:12:26 -- scripts/common.sh@334 -- # local ver2 ver2_l 01:17:34.663 05:12:26 -- scripts/common.sh@336 -- # IFS=.-: 01:17:34.663 05:12:26 -- scripts/common.sh@336 -- # read -ra ver1 01:17:34.663 05:12:26 -- scripts/common.sh@337 -- # IFS=.-: 01:17:34.663 05:12:26 -- scripts/common.sh@337 -- # read -ra ver2 01:17:34.663 05:12:26 -- scripts/common.sh@338 -- # local 'op=<' 01:17:34.663 05:12:26 -- scripts/common.sh@340 -- # ver1_l=2 01:17:34.663 05:12:26 -- scripts/common.sh@341 -- # ver2_l=1 01:17:34.663 05:12:26 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:17:34.663 05:12:26 -- scripts/common.sh@344 -- # case "$op" in 01:17:34.663 05:12:26 -- scripts/common.sh@345 -- # : 1 01:17:34.663 05:12:26 -- scripts/common.sh@364 -- # (( v = 0 )) 01:17:34.663 05:12:26 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:17:34.663 05:12:26 -- scripts/common.sh@365 -- # decimal 1 01:17:34.663 05:12:26 -- scripts/common.sh@353 -- # local d=1 01:17:34.663 05:12:26 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:17:34.663 05:12:26 -- scripts/common.sh@355 -- # echo 1 01:17:34.663 05:12:26 -- scripts/common.sh@365 -- # ver1[v]=1 01:17:34.663 05:12:26 -- scripts/common.sh@366 -- # decimal 2 01:17:34.663 05:12:26 -- scripts/common.sh@353 -- # local d=2 01:17:34.663 05:12:26 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:17:34.663 05:12:26 -- scripts/common.sh@355 -- # echo 2 01:17:34.663 05:12:26 -- scripts/common.sh@366 -- # ver2[v]=2 01:17:34.663 05:12:26 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:17:34.663 05:12:26 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:17:34.663 05:12:26 -- scripts/common.sh@368 -- # return 0 01:17:34.663 05:12:26 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:17:34.663 05:12:26 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:17:34.663 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:17:34.663 --rc genhtml_branch_coverage=1 01:17:34.663 --rc genhtml_function_coverage=1 01:17:34.663 --rc genhtml_legend=1 01:17:34.663 --rc geninfo_all_blocks=1 01:17:34.663 --rc geninfo_unexecuted_blocks=1 01:17:34.663 01:17:34.663 ' 01:17:34.663 05:12:26 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:17:34.663 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:17:34.663 --rc genhtml_branch_coverage=1 01:17:34.663 --rc genhtml_function_coverage=1 01:17:34.663 --rc genhtml_legend=1 01:17:34.663 --rc geninfo_all_blocks=1 01:17:34.663 --rc geninfo_unexecuted_blocks=1 01:17:34.663 01:17:34.663 ' 01:17:34.663 05:12:26 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:17:34.663 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:17:34.663 --rc genhtml_branch_coverage=1 01:17:34.663 --rc 
genhtml_function_coverage=1 01:17:34.663 --rc genhtml_legend=1 01:17:34.663 --rc geninfo_all_blocks=1 01:17:34.663 --rc geninfo_unexecuted_blocks=1 01:17:34.663 01:17:34.663 ' 01:17:34.663 05:12:26 -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:17:34.663 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:17:34.663 --rc genhtml_branch_coverage=1 01:17:34.663 --rc genhtml_function_coverage=1 01:17:34.663 --rc genhtml_legend=1 01:17:34.663 --rc geninfo_all_blocks=1 01:17:34.663 --rc geninfo_unexecuted_blocks=1 01:17:34.663 01:17:34.663 ' 01:17:34.663 05:12:26 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:17:34.663 05:12:26 -- nvmf/common.sh@7 -- # uname -s 01:17:34.663 05:12:26 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:17:34.663 05:12:26 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:17:34.663 05:12:26 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:17:34.663 05:12:26 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:17:34.663 05:12:26 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:17:34.663 05:12:26 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:17:34.663 05:12:26 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:17:34.663 05:12:26 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:17:34.663 05:12:26 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:17:34.663 05:12:26 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:17:34.663 05:12:26 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:806ada1f-4f7d-4439-bb20-849f8d3247b8 01:17:34.663 05:12:26 -- nvmf/common.sh@18 -- # NVME_HOSTID=806ada1f-4f7d-4439-bb20-849f8d3247b8 01:17:34.663 05:12:26 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:17:34.663 05:12:26 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:17:34.663 05:12:26 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 01:17:34.663 05:12:26 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
01:17:34.663 05:12:26 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:17:34.663 05:12:26 -- scripts/common.sh@15 -- # shopt -s extglob 01:17:34.663 05:12:26 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:17:34.663 05:12:26 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:17:34.663 05:12:26 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:17:34.663 05:12:26 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:17:34.663 05:12:26 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:17:34.663 05:12:26 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:17:34.663 05:12:26 -- paths/export.sh@5 -- # export PATH 01:17:34.663 05:12:26 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:17:34.663 05:12:26 -- nvmf/common.sh@51 -- # : 0 01:17:34.663 05:12:26 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:17:34.663 05:12:26 -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:17:34.663 05:12:26 -- nvmf/common.sh@25 
-- # '[' 0 -eq 1 ']' 01:17:34.663 05:12:26 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:17:34.663 05:12:26 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:17:34.663 05:12:26 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:17:34.663 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:17:34.663 05:12:26 -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:17:34.663 05:12:26 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:17:34.663 05:12:26 -- nvmf/common.sh@55 -- # have_pci_nics=0 01:17:34.663 05:12:26 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 01:17:34.663 05:12:26 -- spdk/autotest.sh@32 -- # uname -s 01:17:34.663 05:12:26 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 01:17:34.663 05:12:26 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 01:17:34.663 05:12:26 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 01:17:34.663 05:12:26 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 01:17:34.663 05:12:26 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 01:17:34.663 05:12:26 -- spdk/autotest.sh@44 -- # modprobe nbd 01:17:34.663 05:12:26 -- spdk/autotest.sh@46 -- # type -P udevadm 01:17:34.663 05:12:26 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 01:17:34.663 05:12:26 -- spdk/autotest.sh@48 -- # udevadm_pid=54313 01:17:34.663 05:12:26 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 01:17:34.663 05:12:26 -- spdk/autotest.sh@53 -- # start_monitor_resources 01:17:34.663 05:12:26 -- pm/common@17 -- # local monitor 01:17:34.663 05:12:26 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 01:17:34.663 05:12:26 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 01:17:34.663 05:12:26 -- pm/common@25 -- # sleep 1 01:17:34.663 05:12:26 -- pm/common@21 -- # date +%s 01:17:34.663 05:12:26 -- 
pm/common@21 -- # date +%s 01:17:34.663 05:12:26 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733721146 01:17:34.663 05:12:26 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733721146 01:17:34.921 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733721146_collect-vmstat.pm.log 01:17:34.921 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733721146_collect-cpu-load.pm.log 01:17:35.855 05:12:27 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 01:17:35.855 05:12:27 -- spdk/autotest.sh@57 -- # timing_enter autotest 01:17:35.855 05:12:27 -- common/autotest_common.sh@726 -- # xtrace_disable 01:17:35.855 05:12:27 -- common/autotest_common.sh@10 -- # set +x 01:17:35.855 05:12:27 -- spdk/autotest.sh@59 -- # create_test_list 01:17:35.855 05:12:27 -- common/autotest_common.sh@752 -- # xtrace_disable 01:17:35.855 05:12:27 -- common/autotest_common.sh@10 -- # set +x 01:17:35.855 05:12:27 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 01:17:35.855 05:12:27 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 01:17:35.855 05:12:27 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 01:17:35.855 05:12:27 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 01:17:35.855 05:12:27 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 01:17:35.855 05:12:27 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 01:17:35.855 05:12:27 -- common/autotest_common.sh@1457 -- # uname 01:17:35.855 05:12:27 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 01:17:35.855 05:12:27 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 01:17:35.855 05:12:27 -- common/autotest_common.sh@1477 -- 
# uname 01:17:35.855 05:12:27 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 01:17:35.855 05:12:27 -- spdk/autotest.sh@68 -- # [[ y == y ]] 01:17:35.855 05:12:27 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 01:17:35.855 lcov: LCOV version 1.15 01:17:35.855 05:12:27 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 01:17:50.734 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 01:17:50.734 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 01:18:08.819 05:12:57 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 01:18:08.819 05:12:57 -- common/autotest_common.sh@726 -- # xtrace_disable 01:18:08.819 05:12:57 -- common/autotest_common.sh@10 -- # set +x 01:18:08.819 05:12:57 -- spdk/autotest.sh@78 -- # rm -f 01:18:08.819 05:12:57 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 01:18:08.819 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:18:08.819 0000:00:11.0 (1b36 0010): Already using the nvme driver 01:18:08.819 0000:00:10.0 (1b36 0010): Already using the nvme driver 01:18:08.819 05:12:58 -- spdk/autotest.sh@83 -- # get_zoned_devs 01:18:08.819 05:12:58 -- common/autotest_common.sh@1657 -- # zoned_devs=() 01:18:08.819 05:12:58 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 01:18:08.819 05:12:58 -- common/autotest_common.sh@1658 -- # local nvme bdf 01:18:08.819 
05:12:58 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 01:18:08.819 05:12:58 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 01:18:08.819 05:12:58 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 01:18:08.819 05:12:58 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 01:18:08.819 05:12:58 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 01:18:08.819 05:12:58 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 01:18:08.819 05:12:58 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 01:18:08.819 05:12:58 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 01:18:08.819 05:12:58 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 01:18:08.819 05:12:58 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 01:18:08.819 05:12:58 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 01:18:08.819 05:12:58 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n2 01:18:08.819 05:12:58 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 01:18:08.819 05:12:58 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 01:18:08.820 05:12:58 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 01:18:08.820 05:12:58 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 01:18:08.820 05:12:58 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n3 01:18:08.820 05:12:58 -- common/autotest_common.sh@1650 -- # local device=nvme1n3 01:18:08.820 05:12:58 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 01:18:08.820 05:12:58 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 01:18:08.820 05:12:58 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 01:18:08.820 05:12:58 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 01:18:08.820 05:12:58 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 01:18:08.820 05:12:58 -- spdk/autotest.sh@100 -- # 
block_in_use /dev/nvme0n1 01:18:08.820 05:12:58 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 01:18:08.820 05:12:58 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 01:18:08.820 No valid GPT data, bailing 01:18:08.820 05:12:58 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 01:18:08.820 05:12:58 -- scripts/common.sh@394 -- # pt= 01:18:08.820 05:12:58 -- scripts/common.sh@395 -- # return 1 01:18:08.820 05:12:58 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 01:18:08.820 1+0 records in 01:18:08.820 1+0 records out 01:18:08.820 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00416776 s, 252 MB/s 01:18:08.820 05:12:58 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 01:18:08.820 05:12:58 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 01:18:08.820 05:12:58 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 01:18:08.820 05:12:58 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 01:18:08.820 05:12:58 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 01:18:08.820 No valid GPT data, bailing 01:18:08.820 05:12:58 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 01:18:08.820 05:12:58 -- scripts/common.sh@394 -- # pt= 01:18:08.820 05:12:58 -- scripts/common.sh@395 -- # return 1 01:18:08.820 05:12:58 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 01:18:08.820 1+0 records in 01:18:08.820 1+0 records out 01:18:08.820 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00483447 s, 217 MB/s 01:18:08.820 05:12:58 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 01:18:08.820 05:12:58 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 01:18:08.820 05:12:58 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 01:18:08.820 05:12:58 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 01:18:08.820 05:12:58 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 
01:18:08.820 No valid GPT data, bailing 01:18:08.820 05:12:58 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 01:18:08.820 05:12:58 -- scripts/common.sh@394 -- # pt= 01:18:08.820 05:12:58 -- scripts/common.sh@395 -- # return 1 01:18:08.820 05:12:58 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 01:18:08.820 1+0 records in 01:18:08.820 1+0 records out 01:18:08.820 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00483408 s, 217 MB/s 01:18:08.820 05:12:58 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 01:18:08.820 05:12:58 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 01:18:08.820 05:12:58 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 01:18:08.820 05:12:58 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 01:18:08.820 05:12:58 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 01:18:08.820 No valid GPT data, bailing 01:18:08.820 05:12:58 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 01:18:08.820 05:12:58 -- scripts/common.sh@394 -- # pt= 01:18:08.820 05:12:58 -- scripts/common.sh@395 -- # return 1 01:18:08.820 05:12:58 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 01:18:08.820 1+0 records in 01:18:08.820 1+0 records out 01:18:08.820 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00486218 s, 216 MB/s 01:18:08.820 05:12:58 -- spdk/autotest.sh@105 -- # sync 01:18:08.820 05:12:58 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 01:18:08.820 05:12:58 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 01:18:08.820 05:12:58 -- common/autotest_common.sh@22 -- # reap_spdk_processes 01:18:09.387 05:13:00 -- spdk/autotest.sh@111 -- # uname -s 01:18:09.387 05:13:00 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 01:18:09.387 05:13:00 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 01:18:09.387 05:13:00 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 
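The pre_cleanup pass above loops over every /dev/nvme*n* namespace: it skips zoned devices, probes for a partition table via spdk-gpt.py and blkid, and then zeroes the first 1 MiB with dd (autotest.sh@97-101). A minimal dry-run sketch of that pattern follows; the function name and the placeholder device are illustrative, not the real autotest helpers, and the GPT probe is omitted:

```shell
# Simplified, dry-run sketch of the wipe loop seen in the log (illustrative;
# the real scripts also consult spdk-gpt.py and blkid before wiping).
plan_wipe() {
  local dev name zoned
  for dev in "$@"; do
    name=${dev##*/}
    zoned=none
    # Mirrors is_block_zoned: a namespace is skipped unless zoned == "none".
    [ -e "/sys/block/$name/queue/zoned" ] && zoned=$(cat "/sys/block/$name/queue/zoned")
    if [ "$zoned" != "none" ]; then
      echo "skip $dev (zoned)"
    else
      # The log's spdk/autotest.sh@101 step: zero the first 1 MiB.
      echo "dd if=/dev/zero of=$dev bs=1M count=1"
    fi
  done
}

plan_wipe /dev/nvmeXnY   # placeholder device name, prints the planned dd command
```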
01:18:09.955 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:18:09.955 Hugepages 01:18:09.955 node hugesize free / total 01:18:09.955 node0 1048576kB 0 / 0 01:18:09.955 node0 2048kB 0 / 0 01:18:09.955 01:18:09.955 Type BDF Vendor Device NUMA Driver Device Block devices 01:18:10.214 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 01:18:10.214 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 01:18:10.214 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 01:18:10.214 05:13:01 -- spdk/autotest.sh@117 -- # uname -s 01:18:10.214 05:13:01 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 01:18:10.214 05:13:01 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 01:18:10.214 05:13:01 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 01:18:11.144 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:18:11.144 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 01:18:11.144 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 01:18:11.144 05:13:02 -- common/autotest_common.sh@1517 -- # sleep 1 01:18:12.516 05:13:03 -- common/autotest_common.sh@1518 -- # bdfs=() 01:18:12.516 05:13:03 -- common/autotest_common.sh@1518 -- # local bdfs 01:18:12.516 05:13:03 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 01:18:12.516 05:13:03 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 01:18:12.516 05:13:03 -- common/autotest_common.sh@1498 -- # bdfs=() 01:18:12.516 05:13:03 -- common/autotest_common.sh@1498 -- # local bdfs 01:18:12.516 05:13:03 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 01:18:12.516 05:13:03 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 01:18:12.516 05:13:03 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 01:18:12.516 05:13:03 -- 
common/autotest_common.sh@1500 -- # (( 2 == 0 )) 01:18:12.516 05:13:03 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 01:18:12.516 05:13:03 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 01:18:12.773 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:18:12.773 Waiting for block devices as requested 01:18:12.773 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 01:18:12.773 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 01:18:13.031 05:13:04 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 01:18:13.031 05:13:04 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 01:18:13.031 05:13:04 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 01:18:13.031 05:13:04 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 01:18:13.031 05:13:04 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 01:18:13.031 05:13:04 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 01:18:13.031 05:13:04 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 01:18:13.031 05:13:04 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 01:18:13.031 05:13:04 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 01:18:13.031 05:13:04 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 01:18:13.031 05:13:04 -- common/autotest_common.sh@1531 -- # grep oacs 01:18:13.031 05:13:04 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 01:18:13.031 05:13:04 -- common/autotest_common.sh@1531 -- # cut -d: -f2 01:18:13.031 05:13:04 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 01:18:13.031 05:13:04 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 01:18:13.031 05:13:04 -- common/autotest_common.sh@1534 -- 
# [[ 8 -ne 0 ]] 01:18:13.031 05:13:04 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 01:18:13.031 05:13:04 -- common/autotest_common.sh@1540 -- # grep unvmcap 01:18:13.031 05:13:04 -- common/autotest_common.sh@1540 -- # cut -d: -f2 01:18:13.031 05:13:04 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 01:18:13.031 05:13:04 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 01:18:13.031 05:13:04 -- common/autotest_common.sh@1543 -- # continue 01:18:13.031 05:13:04 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 01:18:13.031 05:13:04 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 01:18:13.031 05:13:04 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 01:18:13.031 05:13:04 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 01:18:13.031 05:13:04 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 01:18:13.031 05:13:04 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 01:18:13.031 05:13:04 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 01:18:13.031 05:13:04 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 01:18:13.031 05:13:04 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 01:18:13.031 05:13:04 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 01:18:13.031 05:13:04 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 01:18:13.031 05:13:04 -- common/autotest_common.sh@1531 -- # grep oacs 01:18:13.031 05:13:04 -- common/autotest_common.sh@1531 -- # cut -d: -f2 01:18:13.031 05:13:04 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 01:18:13.031 05:13:04 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 01:18:13.031 05:13:04 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 01:18:13.031 05:13:04 -- common/autotest_common.sh@1540 -- # nvme id-ctrl 
/dev/nvme0 01:18:13.031 05:13:04 -- common/autotest_common.sh@1540 -- # grep unvmcap 01:18:13.031 05:13:04 -- common/autotest_common.sh@1540 -- # cut -d: -f2 01:18:13.031 05:13:04 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 01:18:13.031 05:13:04 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 01:18:13.031 05:13:04 -- common/autotest_common.sh@1543 -- # continue 01:18:13.031 05:13:04 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 01:18:13.031 05:13:04 -- common/autotest_common.sh@732 -- # xtrace_disable 01:18:13.031 05:13:04 -- common/autotest_common.sh@10 -- # set +x 01:18:13.031 05:13:04 -- spdk/autotest.sh@125 -- # timing_enter afterboot 01:18:13.031 05:13:04 -- common/autotest_common.sh@726 -- # xtrace_disable 01:18:13.031 05:13:04 -- common/autotest_common.sh@10 -- # set +x 01:18:13.031 05:13:04 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 01:18:13.596 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:18:13.853 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 01:18:13.853 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 01:18:13.853 05:13:05 -- spdk/autotest.sh@127 -- # timing_exit afterboot 01:18:13.853 05:13:05 -- common/autotest_common.sh@732 -- # xtrace_disable 01:18:13.853 05:13:05 -- common/autotest_common.sh@10 -- # set +x 01:18:14.111 05:13:05 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 01:18:14.111 05:13:05 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 01:18:14.111 05:13:05 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 01:18:14.111 05:13:05 -- common/autotest_common.sh@1563 -- # bdfs=() 01:18:14.111 05:13:05 -- common/autotest_common.sh@1563 -- # _bdfs=() 01:18:14.111 05:13:05 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 01:18:14.111 05:13:05 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 01:18:14.111 05:13:05 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 01:18:14.111 
05:13:05 -- common/autotest_common.sh@1498 -- # bdfs=() 01:18:14.111 05:13:05 -- common/autotest_common.sh@1498 -- # local bdfs 01:18:14.111 05:13:05 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 01:18:14.111 05:13:05 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 01:18:14.111 05:13:05 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 01:18:14.111 05:13:05 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 01:18:14.111 05:13:05 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 01:18:14.111 05:13:05 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 01:18:14.111 05:13:05 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 01:18:14.111 05:13:05 -- common/autotest_common.sh@1566 -- # device=0x0010 01:18:14.111 05:13:05 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 01:18:14.111 05:13:05 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 01:18:14.111 05:13:05 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 01:18:14.111 05:13:05 -- common/autotest_common.sh@1566 -- # device=0x0010 01:18:14.111 05:13:05 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 01:18:14.111 05:13:05 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 01:18:14.111 05:13:05 -- common/autotest_common.sh@1572 -- # return 0 01:18:14.111 05:13:05 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 01:18:14.111 05:13:05 -- common/autotest_common.sh@1580 -- # return 0 01:18:14.111 05:13:05 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 01:18:14.111 05:13:05 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 01:18:14.111 05:13:05 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 01:18:14.111 05:13:05 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 01:18:14.111 05:13:05 -- spdk/autotest.sh@149 -- # timing_enter lib 01:18:14.111 05:13:05 -- 
common/autotest_common.sh@726 -- # xtrace_disable 01:18:14.111 05:13:05 -- common/autotest_common.sh@10 -- # set +x 01:18:14.111 05:13:05 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 01:18:14.111 05:13:05 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 01:18:14.111 05:13:05 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:18:14.111 05:13:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 01:18:14.111 05:13:05 -- common/autotest_common.sh@10 -- # set +x 01:18:14.111 ************************************ 01:18:14.111 START TEST env 01:18:14.111 ************************************ 01:18:14.111 05:13:05 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 01:18:14.111 * Looking for test storage... 01:18:14.111 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 01:18:14.111 05:13:05 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:18:14.111 05:13:05 env -- common/autotest_common.sh@1693 -- # lcov --version 01:18:14.111 05:13:05 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:18:14.370 05:13:05 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:18:14.370 05:13:05 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:18:14.370 05:13:05 env -- scripts/common.sh@333 -- # local ver1 ver1_l 01:18:14.370 05:13:05 env -- scripts/common.sh@334 -- # local ver2 ver2_l 01:18:14.370 05:13:05 env -- scripts/common.sh@336 -- # IFS=.-: 01:18:14.370 05:13:05 env -- scripts/common.sh@336 -- # read -ra ver1 01:18:14.370 05:13:05 env -- scripts/common.sh@337 -- # IFS=.-: 01:18:14.370 05:13:05 env -- scripts/common.sh@337 -- # read -ra ver2 01:18:14.370 05:13:05 env -- scripts/common.sh@338 -- # local 'op=<' 01:18:14.370 05:13:05 env -- scripts/common.sh@340 -- # ver1_l=2 01:18:14.370 05:13:05 env -- scripts/common.sh@341 -- # ver2_l=1 01:18:14.370 05:13:05 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:18:14.370 05:13:05 env -- 
scripts/common.sh@344 -- # case "$op" in 01:18:14.370 05:13:05 env -- scripts/common.sh@345 -- # : 1 01:18:14.370 05:13:05 env -- scripts/common.sh@364 -- # (( v = 0 )) 01:18:14.370 05:13:05 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 01:18:14.370 05:13:05 env -- scripts/common.sh@365 -- # decimal 1 01:18:14.370 05:13:05 env -- scripts/common.sh@353 -- # local d=1 01:18:14.370 05:13:05 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:18:14.370 05:13:05 env -- scripts/common.sh@355 -- # echo 1 01:18:14.370 05:13:05 env -- scripts/common.sh@365 -- # ver1[v]=1 01:18:14.370 05:13:05 env -- scripts/common.sh@366 -- # decimal 2 01:18:14.370 05:13:05 env -- scripts/common.sh@353 -- # local d=2 01:18:14.370 05:13:05 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:18:14.370 05:13:05 env -- scripts/common.sh@355 -- # echo 2 01:18:14.370 05:13:05 env -- scripts/common.sh@366 -- # ver2[v]=2 01:18:14.370 05:13:05 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:18:14.370 05:13:05 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:18:14.370 05:13:05 env -- scripts/common.sh@368 -- # return 0 01:18:14.370 05:13:05 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:18:14.370 05:13:05 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:18:14.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:18:14.370 --rc genhtml_branch_coverage=1 01:18:14.370 --rc genhtml_function_coverage=1 01:18:14.370 --rc genhtml_legend=1 01:18:14.370 --rc geninfo_all_blocks=1 01:18:14.370 --rc geninfo_unexecuted_blocks=1 01:18:14.370 01:18:14.370 ' 01:18:14.370 05:13:05 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:18:14.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:18:14.370 --rc genhtml_branch_coverage=1 01:18:14.370 --rc genhtml_function_coverage=1 01:18:14.370 --rc genhtml_legend=1 01:18:14.370 --rc 
geninfo_all_blocks=1 01:18:14.370 --rc geninfo_unexecuted_blocks=1 01:18:14.370 01:18:14.370 ' 01:18:14.370 05:13:05 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:18:14.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:18:14.370 --rc genhtml_branch_coverage=1 01:18:14.370 --rc genhtml_function_coverage=1 01:18:14.370 --rc genhtml_legend=1 01:18:14.370 --rc geninfo_all_blocks=1 01:18:14.370 --rc geninfo_unexecuted_blocks=1 01:18:14.370 01:18:14.370 ' 01:18:14.370 05:13:05 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:18:14.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:18:14.370 --rc genhtml_branch_coverage=1 01:18:14.370 --rc genhtml_function_coverage=1 01:18:14.370 --rc genhtml_legend=1 01:18:14.370 --rc geninfo_all_blocks=1 01:18:14.370 --rc geninfo_unexecuted_blocks=1 01:18:14.370 01:18:14.370 ' 01:18:14.370 05:13:05 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 01:18:14.370 05:13:05 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:18:14.370 05:13:05 env -- common/autotest_common.sh@1111 -- # xtrace_disable 01:18:14.370 05:13:05 env -- common/autotest_common.sh@10 -- # set +x 01:18:14.370 ************************************ 01:18:14.370 START TEST env_memory 01:18:14.370 ************************************ 01:18:14.370 05:13:05 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 01:18:14.370 01:18:14.370 01:18:14.370 CUnit - A unit testing framework for C - Version 2.1-3 01:18:14.370 http://cunit.sourceforge.net/ 01:18:14.370 01:18:14.370 01:18:14.370 Suite: memory 01:18:14.370 Test: alloc and free memory map ...[2024-12-09 05:13:05.871752] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 284:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 01:18:14.370 passed 01:18:14.370 Test: mem map translation ...[2024-12-09 05:13:05.934882] 
/home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 596:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 01:18:14.370 [2024-12-09 05:13:05.935168] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 596:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 01:18:14.370 [2024-12-09 05:13:05.935583] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 01:18:14.370 [2024-12-09 05:13:05.935789] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 606:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 01:18:14.629 passed 01:18:14.629 Test: mem map registration ...[2024-12-09 05:13:06.035921] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 348:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 01:18:14.629 [2024-12-09 05:13:06.036192] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 348:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 01:18:14.629 passed 01:18:14.629 Test: mem map adjacent registrations ...passed 01:18:14.629 01:18:14.629 Run Summary: Type Total Ran Passed Failed Inactive 01:18:14.629 suites 1 1 n/a 0 0 01:18:14.629 tests 4 4 4 0 0 01:18:14.629 asserts 152 152 152 0 n/a 01:18:14.629 01:18:14.629 Elapsed time = 0.340 seconds 01:18:14.629 01:18:14.629 real 0m0.388s 01:18:14.629 user 0m0.344s 01:18:14.629 sys 0m0.032s 01:18:14.629 05:13:06 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 01:18:14.629 05:13:06 env.env_memory -- common/autotest_common.sh@10 -- # set +x 01:18:14.629 ************************************ 01:18:14.629 END TEST env_memory 01:18:14.629 ************************************ 01:18:14.629 05:13:06 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 01:18:14.629 
05:13:06 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:18:14.629 05:13:06 env -- common/autotest_common.sh@1111 -- # xtrace_disable 01:18:14.629 05:13:06 env -- common/autotest_common.sh@10 -- # set +x 01:18:14.629 ************************************ 01:18:14.629 START TEST env_vtophys 01:18:14.629 ************************************ 01:18:14.629 05:13:06 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 01:18:14.888 EAL: lib.eal log level changed from notice to debug 01:18:14.888 EAL: Detected lcore 0 as core 0 on socket 0 01:18:14.888 EAL: Detected lcore 1 as core 0 on socket 0 01:18:14.888 EAL: Detected lcore 2 as core 0 on socket 0 01:18:14.888 EAL: Detected lcore 3 as core 0 on socket 0 01:18:14.888 EAL: Detected lcore 4 as core 0 on socket 0 01:18:14.888 EAL: Detected lcore 5 as core 0 on socket 0 01:18:14.888 EAL: Detected lcore 6 as core 0 on socket 0 01:18:14.888 EAL: Detected lcore 7 as core 0 on socket 0 01:18:14.888 EAL: Detected lcore 8 as core 0 on socket 0 01:18:14.888 EAL: Detected lcore 9 as core 0 on socket 0 01:18:14.888 EAL: Maximum logical cores by configuration: 128 01:18:14.888 EAL: Detected CPU lcores: 10 01:18:14.888 EAL: Detected NUMA nodes: 1 01:18:14.888 EAL: Checking presence of .so 'librte_eal.so.24.1' 01:18:14.888 EAL: Detected shared linkage of DPDK 01:18:14.888 EAL: No shared files mode enabled, IPC will be disabled 01:18:14.888 EAL: Selected IOVA mode 'PA' 01:18:14.888 EAL: Probing VFIO support... 01:18:14.888 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 01:18:14.888 EAL: VFIO modules not loaded, skipping VFIO support... 01:18:14.888 EAL: Ask a virtual area of 0x2e000 bytes 01:18:14.888 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 01:18:14.888 EAL: Setting up physically contiguous memory... 
01:18:14.888 EAL: Setting maximum number of open files to 524288 01:18:14.888 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 01:18:14.888 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 01:18:14.888 EAL: Ask a virtual area of 0x61000 bytes 01:18:14.888 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 01:18:14.888 EAL: Memseg list allocated at socket 0, page size 0x800kB 01:18:14.888 EAL: Ask a virtual area of 0x400000000 bytes 01:18:14.888 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 01:18:14.888 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 01:18:14.888 EAL: Ask a virtual area of 0x61000 bytes 01:18:14.888 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 01:18:14.888 EAL: Memseg list allocated at socket 0, page size 0x800kB 01:18:14.888 EAL: Ask a virtual area of 0x400000000 bytes 01:18:14.888 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 01:18:14.888 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 01:18:14.888 EAL: Ask a virtual area of 0x61000 bytes 01:18:14.888 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 01:18:14.888 EAL: Memseg list allocated at socket 0, page size 0x800kB 01:18:14.888 EAL: Ask a virtual area of 0x400000000 bytes 01:18:14.888 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 01:18:14.888 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 01:18:14.888 EAL: Ask a virtual area of 0x61000 bytes 01:18:14.888 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 01:18:14.888 EAL: Memseg list allocated at socket 0, page size 0x800kB 01:18:14.888 EAL: Ask a virtual area of 0x400000000 bytes 01:18:14.888 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 01:18:14.888 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 01:18:14.888 EAL: Hugepages will be freed exactly as allocated. 
01:18:14.888 EAL: No shared files mode enabled, IPC is disabled 01:18:14.888 EAL: No shared files mode enabled, IPC is disabled 01:18:14.888 EAL: TSC frequency is ~2200000 KHz 01:18:14.888 EAL: Main lcore 0 is ready (tid=7ff07d0f8a40;cpuset=[0]) 01:18:14.888 EAL: Trying to obtain current memory policy. 01:18:14.888 EAL: Setting policy MPOL_PREFERRED for socket 0 01:18:14.888 EAL: Restoring previous memory policy: 0 01:18:14.888 EAL: request: mp_malloc_sync 01:18:14.888 EAL: No shared files mode enabled, IPC is disabled 01:18:14.888 EAL: Heap on socket 0 was expanded by 2MB 01:18:14.888 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 01:18:14.888 EAL: No PCI address specified using 'addr=' in: bus=pci 01:18:14.888 EAL: Mem event callback 'spdk:(nil)' registered 01:18:14.888 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 01:18:14.888 01:18:14.888 01:18:14.888 CUnit - A unit testing framework for C - Version 2.1-3 01:18:14.888 http://cunit.sourceforge.net/ 01:18:14.888 01:18:14.888 01:18:14.888 Suite: components_suite 01:18:15.454 Test: vtophys_malloc_test ...passed 01:18:15.454 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 01:18:15.454 EAL: Setting policy MPOL_PREFERRED for socket 0 01:18:15.454 EAL: Restoring previous memory policy: 4 01:18:15.454 EAL: Calling mem event callback 'spdk:(nil)' 01:18:15.454 EAL: request: mp_malloc_sync 01:18:15.454 EAL: No shared files mode enabled, IPC is disabled 01:18:15.454 EAL: Heap on socket 0 was expanded by 4MB 01:18:15.454 EAL: Calling mem event callback 'spdk:(nil)' 01:18:15.454 EAL: request: mp_malloc_sync 01:18:15.454 EAL: No shared files mode enabled, IPC is disabled 01:18:15.454 EAL: Heap on socket 0 was shrunk by 4MB 01:18:15.454 EAL: Trying to obtain current memory policy. 
01:18:15.454 EAL: Setting policy MPOL_PREFERRED for socket 0 01:18:15.454 EAL: Restoring previous memory policy: 4 01:18:15.454 EAL: Calling mem event callback 'spdk:(nil)' 01:18:15.454 EAL: request: mp_malloc_sync 01:18:15.454 EAL: No shared files mode enabled, IPC is disabled 01:18:15.454 EAL: Heap on socket 0 was expanded by 6MB 01:18:15.454 EAL: Calling mem event callback 'spdk:(nil)' 01:18:15.454 EAL: request: mp_malloc_sync 01:18:15.454 EAL: No shared files mode enabled, IPC is disabled 01:18:15.454 EAL: Heap on socket 0 was shrunk by 6MB 01:18:15.454 EAL: Trying to obtain current memory policy. 01:18:15.454 EAL: Setting policy MPOL_PREFERRED for socket 0 01:18:15.454 EAL: Restoring previous memory policy: 4 01:18:15.454 EAL: Calling mem event callback 'spdk:(nil)' 01:18:15.454 EAL: request: mp_malloc_sync 01:18:15.454 EAL: No shared files mode enabled, IPC is disabled 01:18:15.454 EAL: Heap on socket 0 was expanded by 10MB 01:18:15.454 EAL: Calling mem event callback 'spdk:(nil)' 01:18:15.454 EAL: request: mp_malloc_sync 01:18:15.454 EAL: No shared files mode enabled, IPC is disabled 01:18:15.454 EAL: Heap on socket 0 was shrunk by 10MB 01:18:15.454 EAL: Trying to obtain current memory policy. 01:18:15.455 EAL: Setting policy MPOL_PREFERRED for socket 0 01:18:15.455 EAL: Restoring previous memory policy: 4 01:18:15.455 EAL: Calling mem event callback 'spdk:(nil)' 01:18:15.455 EAL: request: mp_malloc_sync 01:18:15.455 EAL: No shared files mode enabled, IPC is disabled 01:18:15.455 EAL: Heap on socket 0 was expanded by 18MB 01:18:15.455 EAL: Calling mem event callback 'spdk:(nil)' 01:18:15.455 EAL: request: mp_malloc_sync 01:18:15.455 EAL: No shared files mode enabled, IPC is disabled 01:18:15.455 EAL: Heap on socket 0 was shrunk by 18MB 01:18:15.455 EAL: Trying to obtain current memory policy. 
01:18:15.455 EAL: Setting policy MPOL_PREFERRED for socket 0 01:18:15.455 EAL: Restoring previous memory policy: 4 01:18:15.455 EAL: Calling mem event callback 'spdk:(nil)' 01:18:15.455 EAL: request: mp_malloc_sync 01:18:15.455 EAL: No shared files mode enabled, IPC is disabled 01:18:15.455 EAL: Heap on socket 0 was expanded by 34MB 01:18:15.455 EAL: Calling mem event callback 'spdk:(nil)' 01:18:15.455 EAL: request: mp_malloc_sync 01:18:15.455 EAL: No shared files mode enabled, IPC is disabled 01:18:15.455 EAL: Heap on socket 0 was shrunk by 34MB 01:18:15.455 EAL: Trying to obtain current memory policy. 01:18:15.455 EAL: Setting policy MPOL_PREFERRED for socket 0 01:18:15.713 EAL: Restoring previous memory policy: 4 01:18:15.713 EAL: Calling mem event callback 'spdk:(nil)' 01:18:15.713 EAL: request: mp_malloc_sync 01:18:15.713 EAL: No shared files mode enabled, IPC is disabled 01:18:15.713 EAL: Heap on socket 0 was expanded by 66MB 01:18:15.713 EAL: Calling mem event callback 'spdk:(nil)' 01:18:15.713 EAL: request: mp_malloc_sync 01:18:15.713 EAL: No shared files mode enabled, IPC is disabled 01:18:15.713 EAL: Heap on socket 0 was shrunk by 66MB 01:18:15.713 EAL: Trying to obtain current memory policy. 01:18:15.713 EAL: Setting policy MPOL_PREFERRED for socket 0 01:18:15.713 EAL: Restoring previous memory policy: 4 01:18:15.713 EAL: Calling mem event callback 'spdk:(nil)' 01:18:15.713 EAL: request: mp_malloc_sync 01:18:15.713 EAL: No shared files mode enabled, IPC is disabled 01:18:15.713 EAL: Heap on socket 0 was expanded by 130MB 01:18:15.972 EAL: Calling mem event callback 'spdk:(nil)' 01:18:15.972 EAL: request: mp_malloc_sync 01:18:15.972 EAL: No shared files mode enabled, IPC is disabled 01:18:15.972 EAL: Heap on socket 0 was shrunk by 130MB 01:18:16.231 EAL: Trying to obtain current memory policy. 
01:18:16.231 EAL: Setting policy MPOL_PREFERRED for socket 0 01:18:16.231 EAL: Restoring previous memory policy: 4 01:18:16.231 EAL: Calling mem event callback 'spdk:(nil)' 01:18:16.231 EAL: request: mp_malloc_sync 01:18:16.231 EAL: No shared files mode enabled, IPC is disabled 01:18:16.231 EAL: Heap on socket 0 was expanded by 258MB 01:18:16.489 EAL: Calling mem event callback 'spdk:(nil)' 01:18:16.489 EAL: request: mp_malloc_sync 01:18:16.489 EAL: No shared files mode enabled, IPC is disabled 01:18:16.489 EAL: Heap on socket 0 was shrunk by 258MB 01:18:16.748 EAL: Trying to obtain current memory policy. 01:18:16.748 EAL: Setting policy MPOL_PREFERRED for socket 0 01:18:17.008 EAL: Restoring previous memory policy: 4 01:18:17.008 EAL: Calling mem event callback 'spdk:(nil)' 01:18:17.008 EAL: request: mp_malloc_sync 01:18:17.008 EAL: No shared files mode enabled, IPC is disabled 01:18:17.008 EAL: Heap on socket 0 was expanded by 514MB 01:18:17.576 EAL: Calling mem event callback 'spdk:(nil)' 01:18:17.836 EAL: request: mp_malloc_sync 01:18:17.836 EAL: No shared files mode enabled, IPC is disabled 01:18:17.836 EAL: Heap on socket 0 was shrunk by 514MB 01:18:18.404 EAL: Trying to obtain current memory policy. 
01:18:18.404 EAL: Setting policy MPOL_PREFERRED for socket 0
01:18:18.662 EAL: Restoring previous memory policy: 4
01:18:18.662 EAL: Calling mem event callback 'spdk:(nil)'
01:18:18.662 EAL: request: mp_malloc_sync
01:18:18.662 EAL: No shared files mode enabled, IPC is disabled
01:18:18.662 EAL: Heap on socket 0 was expanded by 1026MB
01:18:20.036 EAL: Calling mem event callback 'spdk:(nil)'
01:18:20.294 EAL: request: mp_malloc_sync
01:18:20.294 EAL: No shared files mode enabled, IPC is disabled
01:18:20.294 EAL: Heap on socket 0 was shrunk by 1026MB
01:18:21.668 passed
01:18:21.668
01:18:21.668 Run Summary: Type Total Ran Passed Failed Inactive
01:18:21.668 suites 1 1 n/a 0 0
01:18:21.668 tests 2 2 2 0 0
01:18:21.668 asserts 5705 5705 5705 0 n/a
01:18:21.668
01:18:21.668 Elapsed time = 6.412 seconds
01:18:21.668 EAL: Calling mem event callback 'spdk:(nil)'
01:18:21.668 EAL: request: mp_malloc_sync
01:18:21.668 EAL: No shared files mode enabled, IPC is disabled
01:18:21.668 EAL: Heap on socket 0 was shrunk by 2MB
01:18:21.668 EAL: No shared files mode enabled, IPC is disabled
01:18:21.668 EAL: No shared files mode enabled, IPC is disabled
01:18:21.668 EAL: No shared files mode enabled, IPC is disabled
01:18:21.668
01:18:21.668 real 0m6.762s
01:18:21.668 user 0m5.665s
01:18:21.668 sys 0m0.929s
01:18:21.668 05:13:12 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable
01:18:21.668 ************************************
01:18:21.668 END TEST env_vtophys
01:18:21.668 ************************************
01:18:21.668 05:13:12 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x
01:18:21.668 05:13:13 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut
01:18:21.668 05:13:13 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
01:18:21.668 05:13:13 env -- common/autotest_common.sh@1111 -- # xtrace_disable
01:18:21.668 05:13:13 env -- common/autotest_common.sh@10 -- # set +x 01:18:21.668
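The env_vtophys run above allocates a doubling series of buffers, and each "expanded by" size in the EAL output is the power-of-two request plus one extra 2MB hugepage (34 = 32 + 2, 66 = 64 + 2, ..., 1026 = 1024 + 2; the extra page is presumably allocator overhead). A quick, purely illustrative sketch of that arithmetic:

```shell
# Reproduce the heap-expansion sizes logged by EAL above: each round
# doubles the buffer, and the heap grows by the buffer plus one 2MB page.
for mb in 32 64 128 256 512 1024; do
  echo "malloc ${mb}MB -> heap expanded by $((mb + 2))MB"
done
```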
************************************
01:18:21.668 START TEST env_pci
01:18:21.668 ************************************
01:18:21.668 05:13:13 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut
01:18:21.668
01:18:21.668
01:18:21.668 CUnit - A unit testing framework for C - Version 2.1-3
01:18:21.668 http://cunit.sourceforge.net/
01:18:21.668
01:18:21.668
01:18:21.668 Suite: pci
01:18:21.668 Test: pci_hook ...[2024-12-09 05:13:13.080027] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56597 has claimed it
01:18:21.668 passed
01:18:21.668
01:18:21.668 Run Summary: Type Total Ran Passed Failed Inactive
01:18:21.668 suites 1 1 n/a 0 0
01:18:21.668 tests 1 1 1 0 0
01:18:21.668 asserts 25 25 25 0 n/a
01:18:21.668
01:18:21.668 Elapsed time = 0.007 seconds
01:18:21.668 EAL: Cannot find device (10000:00:01.0)
01:18:21.668 EAL: Failed to attach device on primary process
01:18:21.668 ************************************
01:18:21.668 END TEST env_pci
01:18:21.668 ************************************
01:18:21.668
01:18:21.668 real 0m0.077s
01:18:21.668 user 0m0.038s
01:18:21.668 sys 0m0.038s
01:18:21.668 05:13:13 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable
01:18:21.668 05:13:13 env.env_pci -- common/autotest_common.sh@10 -- # set +x
01:18:21.668 05:13:13 env -- env/env.sh@14 -- # argv='-c 0x1 '
01:18:21.668 05:13:13 env -- env/env.sh@15 -- # uname
01:18:21.668 05:13:13 env -- env/env.sh@15 -- # '[' Linux = Linux ']'
01:18:21.668 05:13:13 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000
01:18:21.668 05:13:13 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
01:18:21.668 05:13:13 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
01:18:21.668 05:13:13 env
-- common/autotest_common.sh@1111 -- # xtrace_disable 01:18:21.668 05:13:13 env -- common/autotest_common.sh@10 -- # set +x 01:18:21.668 ************************************ 01:18:21.668 START TEST env_dpdk_post_init 01:18:21.668 ************************************ 01:18:21.668 05:13:13 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 01:18:21.668 EAL: Detected CPU lcores: 10 01:18:21.668 EAL: Detected NUMA nodes: 1 01:18:21.668 EAL: Detected shared linkage of DPDK 01:18:21.926 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 01:18:21.927 EAL: Selected IOVA mode 'PA' 01:18:21.927 TELEMETRY: No legacy callbacks, legacy socket not created 01:18:21.927 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 01:18:21.927 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 01:18:21.927 Starting DPDK initialization... 01:18:21.927 Starting SPDK post initialization... 01:18:21.927 SPDK NVMe probe 01:18:21.927 Attaching to 0000:00:10.0 01:18:21.927 Attaching to 0000:00:11.0 01:18:21.927 Attached to 0000:00:10.0 01:18:21.927 Attached to 0000:00:11.0 01:18:21.927 Cleaning up... 
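The env_pci pass earlier in the log hinges on an expected failure: spdk_pci_device_claim could not create the lock file /var/tmp/spdk_pci_lock_10000:00:01.0 because another process already held it, which is exactly what the pci_hook test asserts. The claim scheme amounts to an exclusive per-device lock file; a stand-in sketch using flock (the path and variable names below are hypothetical, not SPDK code):

```shell
# Hold an exclusive, non-blocking lock on a per-device lock file; a second
# claimer's flock -n fails, mirroring the 'probably process ... has claimed
# it' error seen in the env_pci output. The path is a demo stand-in.
lock=/tmp/pci_lock_demo
exec 9>"$lock"
if ! command -v flock >/dev/null 2>&1; then
  claim_status=claimed            # no flock(1) available; treat as uncontended
elif flock -n 9; then
  claim_status=claimed            # first claimer wins the exclusive lock
else
  claim_status=already-claimed    # mirrors the *ERROR* path in pci_hook
fi
echo "device: $claim_status"
```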
01:18:21.927 01:18:21.927 real 0m0.304s 01:18:21.927 user 0m0.090s 01:18:21.927 sys 0m0.112s 01:18:21.927 05:13:13 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 01:18:21.927 05:13:13 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 01:18:21.927 ************************************ 01:18:21.927 END TEST env_dpdk_post_init 01:18:21.927 ************************************ 01:18:21.927 05:13:13 env -- env/env.sh@26 -- # uname 01:18:21.927 05:13:13 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 01:18:21.927 05:13:13 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 01:18:21.927 05:13:13 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:18:21.927 05:13:13 env -- common/autotest_common.sh@1111 -- # xtrace_disable 01:18:21.927 05:13:13 env -- common/autotest_common.sh@10 -- # set +x 01:18:22.184 ************************************ 01:18:22.184 START TEST env_mem_callbacks 01:18:22.184 ************************************ 01:18:22.184 05:13:13 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 01:18:22.184 EAL: Detected CPU lcores: 10 01:18:22.184 EAL: Detected NUMA nodes: 1 01:18:22.184 EAL: Detected shared linkage of DPDK 01:18:22.184 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 01:18:22.184 EAL: Selected IOVA mode 'PA' 01:18:22.184 TELEMETRY: No legacy callbacks, legacy socket not created 01:18:22.184 01:18:22.184 01:18:22.184 CUnit - A unit testing framework for C - Version 2.1-3 01:18:22.184 http://cunit.sourceforge.net/ 01:18:22.184 01:18:22.184 01:18:22.184 Suite: memory 01:18:22.184 Test: test ... 
01:18:22.184 register 0x200000200000 2097152
01:18:22.184 malloc 3145728
01:18:22.184 register 0x200000400000 4194304
01:18:22.184 buf 0x2000004fffc0 len 3145728 PASSED
01:18:22.184 malloc 64
01:18:22.184 buf 0x2000004ffec0 len 64 PASSED
01:18:22.184 malloc 4194304
01:18:22.184 register 0x200000800000 6291456
01:18:22.184 buf 0x2000009fffc0 len 4194304 PASSED
01:18:22.184 free 0x2000004fffc0 3145728
01:18:22.184 free 0x2000004ffec0 64
01:18:22.184 unregister 0x200000400000 4194304 PASSED
01:18:22.184 free 0x2000009fffc0 4194304
01:18:22.184 unregister 0x200000800000 6291456 PASSED
01:18:22.184 malloc 8388608
01:18:22.184 register 0x200000400000 10485760
01:18:22.184 buf 0x2000005fffc0 len 8388608 PASSED
01:18:22.184 free 0x2000005fffc0 8388608
01:18:22.184 unregister 0x200000400000 10485760 PASSED
01:18:22.184 passed
01:18:22.184
01:18:22.184 Run Summary: Type Total Ran Passed Failed Inactive
01:18:22.184 suites 1 1 n/a 0 0
01:18:22.184 tests 1 1 1 0 0
01:18:22.184 asserts 15 15 15 0 n/a
01:18:22.184
01:18:22.184 Elapsed time = 0.057 seconds
01:18:22.442
01:18:22.442 real 0m0.269s
01:18:22.442 user 0m0.088s
01:18:22.442 sys 0m0.077s
01:18:22.442 05:13:13 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable
01:18:22.442 ************************************
01:18:22.442 END TEST env_mem_callbacks
01:18:22.442 ************************************
01:18:22.442 05:13:13 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x
01:18:22.442
01:18:22.442 real 0m8.285s
01:18:22.442 user 0m6.432s
01:18:22.442 sys 0m1.446s
01:18:22.442 05:13:13 env -- common/autotest_common.sh@1130 -- # xtrace_disable
01:18:22.442 05:13:13 env -- common/autotest_common.sh@10 -- # set +x
01:18:22.442 ************************************
01:18:22.442 END TEST env
01:18:22.442 ************************************
01:18:22.442 05:13:13 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh
01:18:22.442 05:13:13 --
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:18:22.442 05:13:13 -- common/autotest_common.sh@1111 -- # xtrace_disable 01:18:22.442 05:13:13 -- common/autotest_common.sh@10 -- # set +x 01:18:22.442 ************************************ 01:18:22.442 START TEST rpc 01:18:22.442 ************************************ 01:18:22.442 05:13:13 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 01:18:22.442 * Looking for test storage... 01:18:22.442 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 01:18:22.442 05:13:13 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:18:22.442 05:13:13 rpc -- common/autotest_common.sh@1693 -- # lcov --version 01:18:22.442 05:13:13 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:18:22.700 05:13:14 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:18:22.700 05:13:14 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:18:22.700 05:13:14 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 01:18:22.700 05:13:14 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 01:18:22.700 05:13:14 rpc -- scripts/common.sh@336 -- # IFS=.-: 01:18:22.700 05:13:14 rpc -- scripts/common.sh@336 -- # read -ra ver1 01:18:22.700 05:13:14 rpc -- scripts/common.sh@337 -- # IFS=.-: 01:18:22.700 05:13:14 rpc -- scripts/common.sh@337 -- # read -ra ver2 01:18:22.700 05:13:14 rpc -- scripts/common.sh@338 -- # local 'op=<' 01:18:22.700 05:13:14 rpc -- scripts/common.sh@340 -- # ver1_l=2 01:18:22.700 05:13:14 rpc -- scripts/common.sh@341 -- # ver2_l=1 01:18:22.700 05:13:14 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:18:22.700 05:13:14 rpc -- scripts/common.sh@344 -- # case "$op" in 01:18:22.700 05:13:14 rpc -- scripts/common.sh@345 -- # : 1 01:18:22.700 05:13:14 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 01:18:22.700 05:13:14 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:18:22.700 05:13:14 rpc -- scripts/common.sh@365 -- # decimal 1 01:18:22.700 05:13:14 rpc -- scripts/common.sh@353 -- # local d=1 01:18:22.700 05:13:14 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:18:22.700 05:13:14 rpc -- scripts/common.sh@355 -- # echo 1 01:18:22.700 05:13:14 rpc -- scripts/common.sh@365 -- # ver1[v]=1 01:18:22.700 05:13:14 rpc -- scripts/common.sh@366 -- # decimal 2 01:18:22.700 05:13:14 rpc -- scripts/common.sh@353 -- # local d=2 01:18:22.700 05:13:14 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:18:22.700 05:13:14 rpc -- scripts/common.sh@355 -- # echo 2 01:18:22.700 05:13:14 rpc -- scripts/common.sh@366 -- # ver2[v]=2 01:18:22.700 05:13:14 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:18:22.700 05:13:14 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:18:22.700 05:13:14 rpc -- scripts/common.sh@368 -- # return 0 01:18:22.700 05:13:14 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:18:22.700 05:13:14 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:18:22.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:18:22.700 --rc genhtml_branch_coverage=1 01:18:22.700 --rc genhtml_function_coverage=1 01:18:22.700 --rc genhtml_legend=1 01:18:22.700 --rc geninfo_all_blocks=1 01:18:22.700 --rc geninfo_unexecuted_blocks=1 01:18:22.700 01:18:22.700 ' 01:18:22.700 05:13:14 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:18:22.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:18:22.700 --rc genhtml_branch_coverage=1 01:18:22.700 --rc genhtml_function_coverage=1 01:18:22.700 --rc genhtml_legend=1 01:18:22.700 --rc geninfo_all_blocks=1 01:18:22.700 --rc geninfo_unexecuted_blocks=1 01:18:22.700 01:18:22.700 ' 01:18:22.700 05:13:14 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:18:22.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
01:18:22.700 --rc genhtml_branch_coverage=1 01:18:22.700 --rc genhtml_function_coverage=1 01:18:22.700 --rc genhtml_legend=1 01:18:22.700 --rc geninfo_all_blocks=1 01:18:22.700 --rc geninfo_unexecuted_blocks=1 01:18:22.700 01:18:22.700 ' 01:18:22.700 05:13:14 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:18:22.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:18:22.700 --rc genhtml_branch_coverage=1 01:18:22.700 --rc genhtml_function_coverage=1 01:18:22.700 --rc genhtml_legend=1 01:18:22.700 --rc geninfo_all_blocks=1 01:18:22.700 --rc geninfo_unexecuted_blocks=1 01:18:22.700 01:18:22.700 ' 01:18:22.700 05:13:14 rpc -- rpc/rpc.sh@65 -- # spdk_pid=56724 01:18:22.700 05:13:14 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 01:18:22.700 05:13:14 rpc -- rpc/rpc.sh@67 -- # waitforlisten 56724 01:18:22.700 05:13:14 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 01:18:22.700 05:13:14 rpc -- common/autotest_common.sh@835 -- # '[' -z 56724 ']' 01:18:22.700 05:13:14 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:18:22.700 05:13:14 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 01:18:22.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:18:22.700 05:13:14 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:18:22.700 05:13:14 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 01:18:22.700 05:13:14 rpc -- common/autotest_common.sh@10 -- # set +x 01:18:22.700 [2024-12-09 05:13:14.242063] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
01:18:22.700 [2024-12-09 05:13:14.242267] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56724 ] 01:18:22.958 [2024-12-09 05:13:14.430607] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:18:22.958 [2024-12-09 05:13:14.563286] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 01:18:22.958 [2024-12-09 05:13:14.563404] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 56724' to capture a snapshot of events at runtime. 01:18:22.958 [2024-12-09 05:13:14.563422] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:18:22.958 [2024-12-09 05:13:14.563436] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:18:22.958 [2024-12-09 05:13:14.563447] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid56724 for offline analysis/debug. 
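The "Tracepoint Group Mask bdev specified" notice corresponds to spdk_tgt having been started with '-e bdev': each trace group occupies a single bit in the mask (bdev is bit 3, i.e. 0x8), and the enabled group gets an all-ones tpoint_mask while the rest stay 0x0. A minimal sketch of the bit arithmetic, with the group-to-bit mapping taken from the trace_get_info output later in this log:

```shell
# Trace groups are one bit each in the 64-bit group mask; '-e bdev'
# enables the group at bit 3. These match the masks reported by
# trace_get_info (bdev = 0x8, sock = 0x8000).
printf 'bdev group bit: 0x%x\n' "$((1 << 3))"
printf 'sock group bit: 0x%x\n' "$((1 << 15))"
```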
01:18:22.958 [2024-12-09 05:13:14.564777] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:18:23.890 05:13:15 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:18:23.890 05:13:15 rpc -- common/autotest_common.sh@868 -- # return 0 01:18:23.890 05:13:15 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 01:18:23.890 05:13:15 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 01:18:23.890 05:13:15 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 01:18:23.890 05:13:15 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 01:18:23.890 05:13:15 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:18:23.890 05:13:15 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 01:18:23.890 05:13:15 rpc -- common/autotest_common.sh@10 -- # set +x 01:18:23.890 ************************************ 01:18:23.890 START TEST rpc_integrity 01:18:23.890 ************************************ 01:18:23.890 05:13:15 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 01:18:23.890 05:13:15 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 01:18:23.890 05:13:15 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 01:18:23.890 05:13:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 01:18:23.890 05:13:15 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:18:23.890 05:13:15 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 01:18:23.890 05:13:15 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 01:18:23.890 05:13:15 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 01:18:23.890 05:13:15 
rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 01:18:23.890 05:13:15 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 01:18:23.890 05:13:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 01:18:23.890 05:13:15 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:18:23.890 05:13:15 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 01:18:23.890 05:13:15 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 01:18:23.890 05:13:15 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 01:18:23.890 05:13:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 01:18:24.148 05:13:15 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:18:24.148 05:13:15 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 01:18:24.148 { 01:18:24.148 "name": "Malloc0", 01:18:24.148 "aliases": [ 01:18:24.148 "d28ee856-f8b8-4861-b7ea-0e767f4721ea" 01:18:24.148 ], 01:18:24.148 "product_name": "Malloc disk", 01:18:24.148 "block_size": 512, 01:18:24.148 "num_blocks": 16384, 01:18:24.148 "uuid": "d28ee856-f8b8-4861-b7ea-0e767f4721ea", 01:18:24.148 "assigned_rate_limits": { 01:18:24.148 "rw_ios_per_sec": 0, 01:18:24.148 "rw_mbytes_per_sec": 0, 01:18:24.148 "r_mbytes_per_sec": 0, 01:18:24.148 "w_mbytes_per_sec": 0 01:18:24.148 }, 01:18:24.148 "claimed": false, 01:18:24.148 "zoned": false, 01:18:24.148 "supported_io_types": { 01:18:24.148 "read": true, 01:18:24.148 "write": true, 01:18:24.148 "unmap": true, 01:18:24.148 "flush": true, 01:18:24.148 "reset": true, 01:18:24.148 "nvme_admin": false, 01:18:24.148 "nvme_io": false, 01:18:24.148 "nvme_io_md": false, 01:18:24.148 "write_zeroes": true, 01:18:24.148 "zcopy": true, 01:18:24.148 "get_zone_info": false, 01:18:24.148 "zone_management": false, 01:18:24.148 "zone_append": false, 01:18:24.148 "compare": false, 01:18:24.148 "compare_and_write": false, 01:18:24.148 "abort": true, 01:18:24.148 "seek_hole": false, 
01:18:24.148 "seek_data": false, 01:18:24.148 "copy": true, 01:18:24.148 "nvme_iov_md": false 01:18:24.148 }, 01:18:24.148 "memory_domains": [ 01:18:24.148 { 01:18:24.148 "dma_device_id": "system", 01:18:24.148 "dma_device_type": 1 01:18:24.148 }, 01:18:24.148 { 01:18:24.148 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:18:24.148 "dma_device_type": 2 01:18:24.148 } 01:18:24.148 ], 01:18:24.148 "driver_specific": {} 01:18:24.148 } 01:18:24.148 ]' 01:18:24.148 05:13:15 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 01:18:24.148 05:13:15 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 01:18:24.148 05:13:15 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 01:18:24.148 05:13:15 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 01:18:24.148 05:13:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 01:18:24.148 [2024-12-09 05:13:15.580411] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 01:18:24.148 [2024-12-09 05:13:15.580526] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:18:24.148 [2024-12-09 05:13:15.580561] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 01:18:24.148 [2024-12-09 05:13:15.580593] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:18:24.148 [2024-12-09 05:13:15.583589] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:18:24.148 [2024-12-09 05:13:15.583657] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 01:18:24.148 Passthru0 01:18:24.148 05:13:15 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:18:24.148 05:13:15 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 01:18:24.148 05:13:15 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 01:18:24.148 05:13:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
01:18:24.148 05:13:15 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:18:24.148 05:13:15 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 01:18:24.148 { 01:18:24.148 "name": "Malloc0", 01:18:24.148 "aliases": [ 01:18:24.148 "d28ee856-f8b8-4861-b7ea-0e767f4721ea" 01:18:24.148 ], 01:18:24.148 "product_name": "Malloc disk", 01:18:24.148 "block_size": 512, 01:18:24.148 "num_blocks": 16384, 01:18:24.148 "uuid": "d28ee856-f8b8-4861-b7ea-0e767f4721ea", 01:18:24.148 "assigned_rate_limits": { 01:18:24.148 "rw_ios_per_sec": 0, 01:18:24.148 "rw_mbytes_per_sec": 0, 01:18:24.148 "r_mbytes_per_sec": 0, 01:18:24.148 "w_mbytes_per_sec": 0 01:18:24.148 }, 01:18:24.148 "claimed": true, 01:18:24.148 "claim_type": "exclusive_write", 01:18:24.148 "zoned": false, 01:18:24.148 "supported_io_types": { 01:18:24.148 "read": true, 01:18:24.148 "write": true, 01:18:24.148 "unmap": true, 01:18:24.148 "flush": true, 01:18:24.148 "reset": true, 01:18:24.148 "nvme_admin": false, 01:18:24.148 "nvme_io": false, 01:18:24.148 "nvme_io_md": false, 01:18:24.148 "write_zeroes": true, 01:18:24.148 "zcopy": true, 01:18:24.148 "get_zone_info": false, 01:18:24.148 "zone_management": false, 01:18:24.148 "zone_append": false, 01:18:24.148 "compare": false, 01:18:24.148 "compare_and_write": false, 01:18:24.148 "abort": true, 01:18:24.148 "seek_hole": false, 01:18:24.148 "seek_data": false, 01:18:24.148 "copy": true, 01:18:24.148 "nvme_iov_md": false 01:18:24.148 }, 01:18:24.148 "memory_domains": [ 01:18:24.148 { 01:18:24.148 "dma_device_id": "system", 01:18:24.148 "dma_device_type": 1 01:18:24.148 }, 01:18:24.148 { 01:18:24.148 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:18:24.148 "dma_device_type": 2 01:18:24.148 } 01:18:24.148 ], 01:18:24.148 "driver_specific": {} 01:18:24.148 }, 01:18:24.148 { 01:18:24.148 "name": "Passthru0", 01:18:24.148 "aliases": [ 01:18:24.148 "e7acfc2e-5481-5fb0-a8e8-d2caf918c623" 01:18:24.148 ], 01:18:24.148 "product_name": "passthru", 01:18:24.148 
"block_size": 512, 01:18:24.148 "num_blocks": 16384, 01:18:24.148 "uuid": "e7acfc2e-5481-5fb0-a8e8-d2caf918c623", 01:18:24.148 "assigned_rate_limits": { 01:18:24.148 "rw_ios_per_sec": 0, 01:18:24.148 "rw_mbytes_per_sec": 0, 01:18:24.148 "r_mbytes_per_sec": 0, 01:18:24.148 "w_mbytes_per_sec": 0 01:18:24.148 }, 01:18:24.148 "claimed": false, 01:18:24.148 "zoned": false, 01:18:24.148 "supported_io_types": { 01:18:24.148 "read": true, 01:18:24.148 "write": true, 01:18:24.148 "unmap": true, 01:18:24.148 "flush": true, 01:18:24.148 "reset": true, 01:18:24.148 "nvme_admin": false, 01:18:24.148 "nvme_io": false, 01:18:24.148 "nvme_io_md": false, 01:18:24.148 "write_zeroes": true, 01:18:24.148 "zcopy": true, 01:18:24.148 "get_zone_info": false, 01:18:24.148 "zone_management": false, 01:18:24.148 "zone_append": false, 01:18:24.148 "compare": false, 01:18:24.148 "compare_and_write": false, 01:18:24.148 "abort": true, 01:18:24.148 "seek_hole": false, 01:18:24.148 "seek_data": false, 01:18:24.148 "copy": true, 01:18:24.148 "nvme_iov_md": false 01:18:24.148 }, 01:18:24.148 "memory_domains": [ 01:18:24.148 { 01:18:24.148 "dma_device_id": "system", 01:18:24.148 "dma_device_type": 1 01:18:24.148 }, 01:18:24.148 { 01:18:24.148 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:18:24.148 "dma_device_type": 2 01:18:24.148 } 01:18:24.148 ], 01:18:24.148 "driver_specific": { 01:18:24.148 "passthru": { 01:18:24.148 "name": "Passthru0", 01:18:24.148 "base_bdev_name": "Malloc0" 01:18:24.148 } 01:18:24.148 } 01:18:24.148 } 01:18:24.148 ]' 01:18:24.148 05:13:15 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 01:18:24.148 05:13:15 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 01:18:24.148 05:13:15 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 01:18:24.148 05:13:15 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 01:18:24.148 05:13:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 01:18:24.148 05:13:15 rpc.rpc_integrity 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:18:24.148 05:13:15 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 01:18:24.148 05:13:15 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 01:18:24.148 05:13:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 01:18:24.148 05:13:15 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:18:24.148 05:13:15 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 01:18:24.148 05:13:15 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 01:18:24.148 05:13:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 01:18:24.148 05:13:15 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:18:24.148 05:13:15 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 01:18:24.148 05:13:15 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 01:18:24.148 05:13:15 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 01:18:24.148 01:18:24.148 real 0m0.341s 01:18:24.148 user 0m0.217s 01:18:24.148 sys 0m0.041s 01:18:24.148 ************************************ 01:18:24.148 END TEST rpc_integrity 01:18:24.148 05:13:15 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 01:18:24.148 05:13:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 01:18:24.148 ************************************ 01:18:24.406 05:13:15 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 01:18:24.406 05:13:15 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:18:24.406 05:13:15 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 01:18:24.406 05:13:15 rpc -- common/autotest_common.sh@10 -- # set +x 01:18:24.406 ************************************ 01:18:24.406 START TEST rpc_plugins 01:18:24.406 ************************************ 01:18:24.406 05:13:15 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 01:18:24.406 05:13:15 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # 
rpc_cmd --plugin rpc_plugin create_malloc 01:18:24.406 05:13:15 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 01:18:24.406 05:13:15 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 01:18:24.406 05:13:15 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:18:24.406 05:13:15 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 01:18:24.406 05:13:15 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 01:18:24.406 05:13:15 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 01:18:24.406 05:13:15 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 01:18:24.406 05:13:15 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:18:24.406 05:13:15 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 01:18:24.406 { 01:18:24.406 "name": "Malloc1", 01:18:24.406 "aliases": [ 01:18:24.406 "98301332-6ec2-4064-93b9-1596dda808c8" 01:18:24.406 ], 01:18:24.406 "product_name": "Malloc disk", 01:18:24.406 "block_size": 4096, 01:18:24.406 "num_blocks": 256, 01:18:24.406 "uuid": "98301332-6ec2-4064-93b9-1596dda808c8", 01:18:24.406 "assigned_rate_limits": { 01:18:24.406 "rw_ios_per_sec": 0, 01:18:24.406 "rw_mbytes_per_sec": 0, 01:18:24.406 "r_mbytes_per_sec": 0, 01:18:24.406 "w_mbytes_per_sec": 0 01:18:24.406 }, 01:18:24.406 "claimed": false, 01:18:24.406 "zoned": false, 01:18:24.406 "supported_io_types": { 01:18:24.406 "read": true, 01:18:24.406 "write": true, 01:18:24.406 "unmap": true, 01:18:24.406 "flush": true, 01:18:24.406 "reset": true, 01:18:24.406 "nvme_admin": false, 01:18:24.406 "nvme_io": false, 01:18:24.406 "nvme_io_md": false, 01:18:24.406 "write_zeroes": true, 01:18:24.406 "zcopy": true, 01:18:24.406 "get_zone_info": false, 01:18:24.406 "zone_management": false, 01:18:24.406 "zone_append": false, 01:18:24.406 "compare": false, 01:18:24.406 "compare_and_write": false, 01:18:24.406 "abort": true, 01:18:24.406 "seek_hole": false, 01:18:24.406 "seek_data": false, 01:18:24.406 "copy": 
true, 01:18:24.406 "nvme_iov_md": false 01:18:24.406 }, 01:18:24.406 "memory_domains": [ 01:18:24.406 { 01:18:24.406 "dma_device_id": "system", 01:18:24.406 "dma_device_type": 1 01:18:24.406 }, 01:18:24.406 { 01:18:24.406 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:18:24.406 "dma_device_type": 2 01:18:24.406 } 01:18:24.406 ], 01:18:24.406 "driver_specific": {} 01:18:24.406 } 01:18:24.406 ]' 01:18:24.406 05:13:15 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 01:18:24.406 05:13:15 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 01:18:24.406 05:13:15 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 01:18:24.406 05:13:15 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 01:18:24.406 05:13:15 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 01:18:24.406 05:13:15 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:18:24.406 05:13:15 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 01:18:24.406 05:13:15 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 01:18:24.406 05:13:15 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 01:18:24.406 05:13:15 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:18:24.406 05:13:15 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 01:18:24.406 05:13:15 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 01:18:24.406 05:13:16 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 01:18:24.406 01:18:24.406 real 0m0.187s 01:18:24.406 user 0m0.112s 01:18:24.407 sys 0m0.019s 01:18:24.407 ************************************ 01:18:24.407 END TEST rpc_plugins 01:18:24.407 05:13:16 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 01:18:24.407 05:13:16 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 01:18:24.407 ************************************ 01:18:24.672 05:13:16 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 01:18:24.672 05:13:16 rpc -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:18:24.672 05:13:16 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 01:18:24.672 05:13:16 rpc -- common/autotest_common.sh@10 -- # set +x 01:18:24.672 ************************************ 01:18:24.672 START TEST rpc_trace_cmd_test 01:18:24.672 ************************************ 01:18:24.672 05:13:16 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 01:18:24.672 05:13:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 01:18:24.673 05:13:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 01:18:24.673 05:13:16 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:18:24.673 05:13:16 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 01:18:24.673 05:13:16 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:18:24.673 05:13:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 01:18:24.673 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid56724", 01:18:24.673 "tpoint_group_mask": "0x8", 01:18:24.673 "iscsi_conn": { 01:18:24.673 "mask": "0x2", 01:18:24.673 "tpoint_mask": "0x0" 01:18:24.673 }, 01:18:24.673 "scsi": { 01:18:24.673 "mask": "0x4", 01:18:24.673 "tpoint_mask": "0x0" 01:18:24.673 }, 01:18:24.673 "bdev": { 01:18:24.673 "mask": "0x8", 01:18:24.673 "tpoint_mask": "0xffffffffffffffff" 01:18:24.673 }, 01:18:24.673 "nvmf_rdma": { 01:18:24.673 "mask": "0x10", 01:18:24.673 "tpoint_mask": "0x0" 01:18:24.673 }, 01:18:24.673 "nvmf_tcp": { 01:18:24.673 "mask": "0x20", 01:18:24.673 "tpoint_mask": "0x0" 01:18:24.673 }, 01:18:24.673 "ftl": { 01:18:24.673 "mask": "0x40", 01:18:24.673 "tpoint_mask": "0x0" 01:18:24.673 }, 01:18:24.673 "blobfs": { 01:18:24.673 "mask": "0x80", 01:18:24.673 "tpoint_mask": "0x0" 01:18:24.673 }, 01:18:24.673 "dsa": { 01:18:24.673 "mask": "0x200", 01:18:24.673 "tpoint_mask": "0x0" 01:18:24.673 }, 01:18:24.673 "thread": { 01:18:24.673 "mask": "0x400", 01:18:24.673 
"tpoint_mask": "0x0" 01:18:24.673 }, 01:18:24.673 "nvme_pcie": { 01:18:24.673 "mask": "0x800", 01:18:24.673 "tpoint_mask": "0x0" 01:18:24.673 }, 01:18:24.673 "iaa": { 01:18:24.673 "mask": "0x1000", 01:18:24.673 "tpoint_mask": "0x0" 01:18:24.673 }, 01:18:24.673 "nvme_tcp": { 01:18:24.673 "mask": "0x2000", 01:18:24.673 "tpoint_mask": "0x0" 01:18:24.673 }, 01:18:24.673 "bdev_nvme": { 01:18:24.673 "mask": "0x4000", 01:18:24.673 "tpoint_mask": "0x0" 01:18:24.673 }, 01:18:24.673 "sock": { 01:18:24.673 "mask": "0x8000", 01:18:24.673 "tpoint_mask": "0x0" 01:18:24.673 }, 01:18:24.673 "blob": { 01:18:24.673 "mask": "0x10000", 01:18:24.673 "tpoint_mask": "0x0" 01:18:24.673 }, 01:18:24.673 "bdev_raid": { 01:18:24.673 "mask": "0x20000", 01:18:24.673 "tpoint_mask": "0x0" 01:18:24.673 }, 01:18:24.673 "scheduler": { 01:18:24.673 "mask": "0x40000", 01:18:24.673 "tpoint_mask": "0x0" 01:18:24.673 } 01:18:24.673 }' 01:18:24.673 05:13:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 01:18:24.673 05:13:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 01:18:24.673 05:13:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 01:18:24.673 05:13:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 01:18:24.673 05:13:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 01:18:24.673 05:13:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 01:18:24.673 05:13:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 01:18:24.940 05:13:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 01:18:24.940 05:13:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 01:18:24.940 05:13:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 01:18:24.940 01:18:24.940 real 0m0.287s 01:18:24.940 user 0m0.247s 01:18:24.940 sys 0m0.029s 01:18:24.940 ************************************ 01:18:24.940 END TEST rpc_trace_cmd_test 01:18:24.940 
************************************ 01:18:24.940 05:13:16 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 01:18:24.940 05:13:16 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 01:18:24.940 05:13:16 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 01:18:24.940 05:13:16 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 01:18:24.940 05:13:16 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 01:18:24.940 05:13:16 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:18:24.940 05:13:16 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 01:18:24.940 05:13:16 rpc -- common/autotest_common.sh@10 -- # set +x 01:18:24.940 ************************************ 01:18:24.940 START TEST rpc_daemon_integrity 01:18:24.940 ************************************ 01:18:24.940 05:13:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 01:18:24.940 05:13:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 01:18:24.940 05:13:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 01:18:24.940 05:13:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 01:18:24.940 05:13:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:18:24.940 05:13:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 01:18:24.940 05:13:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 01:18:24.940 05:13:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 01:18:24.940 05:13:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 01:18:24.940 05:13:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 01:18:24.940 05:13:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 01:18:24.940 05:13:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:18:24.940 05:13:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # 
malloc=Malloc2 01:18:24.940 05:13:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 01:18:24.940 05:13:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 01:18:24.940 05:13:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 01:18:24.940 05:13:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:18:24.940 05:13:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 01:18:24.940 { 01:18:24.940 "name": "Malloc2", 01:18:24.940 "aliases": [ 01:18:24.940 "8b56d406-b1b6-4f3c-a1a3-816cce35bec4" 01:18:24.940 ], 01:18:24.940 "product_name": "Malloc disk", 01:18:24.940 "block_size": 512, 01:18:24.940 "num_blocks": 16384, 01:18:24.940 "uuid": "8b56d406-b1b6-4f3c-a1a3-816cce35bec4", 01:18:24.940 "assigned_rate_limits": { 01:18:24.940 "rw_ios_per_sec": 0, 01:18:24.940 "rw_mbytes_per_sec": 0, 01:18:24.940 "r_mbytes_per_sec": 0, 01:18:24.940 "w_mbytes_per_sec": 0 01:18:24.940 }, 01:18:24.940 "claimed": false, 01:18:24.940 "zoned": false, 01:18:24.940 "supported_io_types": { 01:18:24.940 "read": true, 01:18:24.940 "write": true, 01:18:24.940 "unmap": true, 01:18:24.940 "flush": true, 01:18:24.940 "reset": true, 01:18:24.940 "nvme_admin": false, 01:18:24.940 "nvme_io": false, 01:18:24.940 "nvme_io_md": false, 01:18:24.940 "write_zeroes": true, 01:18:24.940 "zcopy": true, 01:18:24.940 "get_zone_info": false, 01:18:24.940 "zone_management": false, 01:18:24.940 "zone_append": false, 01:18:24.940 "compare": false, 01:18:24.940 "compare_and_write": false, 01:18:24.940 "abort": true, 01:18:24.940 "seek_hole": false, 01:18:24.940 "seek_data": false, 01:18:24.940 "copy": true, 01:18:24.940 "nvme_iov_md": false 01:18:24.940 }, 01:18:24.940 "memory_domains": [ 01:18:24.940 { 01:18:24.940 "dma_device_id": "system", 01:18:24.940 "dma_device_type": 1 01:18:24.940 }, 01:18:24.940 { 01:18:24.940 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:18:24.940 "dma_device_type": 2 01:18:24.940 } 
01:18:24.940 ], 01:18:24.940 "driver_specific": {} 01:18:24.940 } 01:18:24.940 ]' 01:18:24.940 05:13:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 01:18:25.198 05:13:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 01:18:25.199 05:13:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 01:18:25.199 05:13:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 01:18:25.199 05:13:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 01:18:25.199 [2024-12-09 05:13:16.566864] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 01:18:25.199 [2024-12-09 05:13:16.566944] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:18:25.199 [2024-12-09 05:13:16.566972] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 01:18:25.199 [2024-12-09 05:13:16.566990] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:18:25.199 [2024-12-09 05:13:16.570052] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:18:25.199 [2024-12-09 05:13:16.570124] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 01:18:25.199 Passthru0 01:18:25.199 05:13:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:18:25.199 05:13:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 01:18:25.199 05:13:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 01:18:25.199 05:13:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 01:18:25.199 05:13:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:18:25.199 05:13:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 01:18:25.199 { 01:18:25.199 "name": "Malloc2", 01:18:25.199 "aliases": [ 01:18:25.199 "8b56d406-b1b6-4f3c-a1a3-816cce35bec4" 
01:18:25.199 ], 01:18:25.199 "product_name": "Malloc disk", 01:18:25.199 "block_size": 512, 01:18:25.199 "num_blocks": 16384, 01:18:25.199 "uuid": "8b56d406-b1b6-4f3c-a1a3-816cce35bec4", 01:18:25.199 "assigned_rate_limits": { 01:18:25.199 "rw_ios_per_sec": 0, 01:18:25.199 "rw_mbytes_per_sec": 0, 01:18:25.199 "r_mbytes_per_sec": 0, 01:18:25.199 "w_mbytes_per_sec": 0 01:18:25.199 }, 01:18:25.199 "claimed": true, 01:18:25.199 "claim_type": "exclusive_write", 01:18:25.199 "zoned": false, 01:18:25.199 "supported_io_types": { 01:18:25.199 "read": true, 01:18:25.199 "write": true, 01:18:25.199 "unmap": true, 01:18:25.199 "flush": true, 01:18:25.199 "reset": true, 01:18:25.199 "nvme_admin": false, 01:18:25.199 "nvme_io": false, 01:18:25.199 "nvme_io_md": false, 01:18:25.199 "write_zeroes": true, 01:18:25.199 "zcopy": true, 01:18:25.199 "get_zone_info": false, 01:18:25.199 "zone_management": false, 01:18:25.199 "zone_append": false, 01:18:25.199 "compare": false, 01:18:25.199 "compare_and_write": false, 01:18:25.199 "abort": true, 01:18:25.199 "seek_hole": false, 01:18:25.199 "seek_data": false, 01:18:25.199 "copy": true, 01:18:25.199 "nvme_iov_md": false 01:18:25.199 }, 01:18:25.199 "memory_domains": [ 01:18:25.199 { 01:18:25.199 "dma_device_id": "system", 01:18:25.199 "dma_device_type": 1 01:18:25.199 }, 01:18:25.199 { 01:18:25.199 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:18:25.199 "dma_device_type": 2 01:18:25.199 } 01:18:25.199 ], 01:18:25.199 "driver_specific": {} 01:18:25.199 }, 01:18:25.199 { 01:18:25.199 "name": "Passthru0", 01:18:25.199 "aliases": [ 01:18:25.199 "a8e0e8f3-eb8e-5a0f-b170-bfa61ba1f5eb" 01:18:25.199 ], 01:18:25.199 "product_name": "passthru", 01:18:25.199 "block_size": 512, 01:18:25.199 "num_blocks": 16384, 01:18:25.199 "uuid": "a8e0e8f3-eb8e-5a0f-b170-bfa61ba1f5eb", 01:18:25.199 "assigned_rate_limits": { 01:18:25.199 "rw_ios_per_sec": 0, 01:18:25.199 "rw_mbytes_per_sec": 0, 01:18:25.199 "r_mbytes_per_sec": 0, 01:18:25.199 "w_mbytes_per_sec": 0 
01:18:25.199 }, 01:18:25.199 "claimed": false, 01:18:25.199 "zoned": false, 01:18:25.199 "supported_io_types": { 01:18:25.199 "read": true, 01:18:25.199 "write": true, 01:18:25.199 "unmap": true, 01:18:25.199 "flush": true, 01:18:25.199 "reset": true, 01:18:25.199 "nvme_admin": false, 01:18:25.199 "nvme_io": false, 01:18:25.199 "nvme_io_md": false, 01:18:25.199 "write_zeroes": true, 01:18:25.199 "zcopy": true, 01:18:25.199 "get_zone_info": false, 01:18:25.199 "zone_management": false, 01:18:25.199 "zone_append": false, 01:18:25.199 "compare": false, 01:18:25.199 "compare_and_write": false, 01:18:25.199 "abort": true, 01:18:25.199 "seek_hole": false, 01:18:25.199 "seek_data": false, 01:18:25.199 "copy": true, 01:18:25.199 "nvme_iov_md": false 01:18:25.199 }, 01:18:25.199 "memory_domains": [ 01:18:25.199 { 01:18:25.199 "dma_device_id": "system", 01:18:25.199 "dma_device_type": 1 01:18:25.199 }, 01:18:25.199 { 01:18:25.199 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:18:25.199 "dma_device_type": 2 01:18:25.199 } 01:18:25.199 ], 01:18:25.199 "driver_specific": { 01:18:25.199 "passthru": { 01:18:25.199 "name": "Passthru0", 01:18:25.199 "base_bdev_name": "Malloc2" 01:18:25.199 } 01:18:25.199 } 01:18:25.199 } 01:18:25.199 ]' 01:18:25.199 05:13:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 01:18:25.199 05:13:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 01:18:25.199 05:13:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 01:18:25.199 05:13:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 01:18:25.199 05:13:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 01:18:25.199 05:13:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:18:25.199 05:13:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 01:18:25.199 05:13:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # 
xtrace_disable 01:18:25.199 05:13:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 01:18:25.199 05:13:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:18:25.199 05:13:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 01:18:25.199 05:13:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 01:18:25.199 05:13:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 01:18:25.199 05:13:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:18:25.199 05:13:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 01:18:25.199 05:13:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 01:18:25.199 05:13:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 01:18:25.199 01:18:25.199 real 0m0.372s 01:18:25.199 user 0m0.232s 01:18:25.199 sys 0m0.049s 01:18:25.199 ************************************ 01:18:25.199 05:13:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 01:18:25.199 05:13:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 01:18:25.199 END TEST rpc_daemon_integrity 01:18:25.199 ************************************ 01:18:25.458 05:13:16 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 01:18:25.458 05:13:16 rpc -- rpc/rpc.sh@84 -- # killprocess 56724 01:18:25.458 05:13:16 rpc -- common/autotest_common.sh@954 -- # '[' -z 56724 ']' 01:18:25.458 05:13:16 rpc -- common/autotest_common.sh@958 -- # kill -0 56724 01:18:25.458 05:13:16 rpc -- common/autotest_common.sh@959 -- # uname 01:18:25.458 05:13:16 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:18:25.458 05:13:16 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56724 01:18:25.458 05:13:16 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:18:25.458 killing process with pid 56724 01:18:25.458 05:13:16 rpc -- common/autotest_common.sh@964 -- 
# '[' reactor_0 = sudo ']' 01:18:25.458 05:13:16 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 56724' 01:18:25.458 05:13:16 rpc -- common/autotest_common.sh@973 -- # kill 56724 01:18:25.458 05:13:16 rpc -- common/autotest_common.sh@978 -- # wait 56724 01:18:27.361 01:18:27.361 real 0m4.955s 01:18:27.361 user 0m5.651s 01:18:27.361 sys 0m1.000s 01:18:27.361 05:13:18 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 01:18:27.361 ************************************ 01:18:27.361 END TEST rpc 01:18:27.361 ************************************ 01:18:27.361 05:13:18 rpc -- common/autotest_common.sh@10 -- # set +x 01:18:27.361 05:13:18 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 01:18:27.361 05:13:18 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:18:27.361 05:13:18 -- common/autotest_common.sh@1111 -- # xtrace_disable 01:18:27.361 05:13:18 -- common/autotest_common.sh@10 -- # set +x 01:18:27.361 ************************************ 01:18:27.361 START TEST skip_rpc 01:18:27.361 ************************************ 01:18:27.361 05:13:18 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 01:18:27.620 * Looking for test storage... 
01:18:27.620 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 01:18:27.620 05:13:19 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:18:27.620 05:13:19 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:18:27.620 05:13:19 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 01:18:27.646 05:13:19 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:18:27.646 05:13:19 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:18:27.646 05:13:19 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 01:18:27.646 05:13:19 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 01:18:27.646 05:13:19 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 01:18:27.646 05:13:19 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 01:18:27.646 05:13:19 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 01:18:27.646 05:13:19 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 01:18:27.646 05:13:19 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 01:18:27.646 05:13:19 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 01:18:27.646 05:13:19 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 01:18:27.646 05:13:19 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:18:27.646 05:13:19 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 01:18:27.646 05:13:19 skip_rpc -- scripts/common.sh@345 -- # : 1 01:18:27.646 05:13:19 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 01:18:27.646 05:13:19 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:18:27.646 05:13:19 skip_rpc -- scripts/common.sh@365 -- # decimal 1 01:18:27.646 05:13:19 skip_rpc -- scripts/common.sh@353 -- # local d=1 01:18:27.646 05:13:19 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:18:27.646 05:13:19 skip_rpc -- scripts/common.sh@355 -- # echo 1 01:18:27.646 05:13:19 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 01:18:27.646 05:13:19 skip_rpc -- scripts/common.sh@366 -- # decimal 2 01:18:27.646 05:13:19 skip_rpc -- scripts/common.sh@353 -- # local d=2 01:18:27.646 05:13:19 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:18:27.646 05:13:19 skip_rpc -- scripts/common.sh@355 -- # echo 2 01:18:27.646 05:13:19 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 01:18:27.646 05:13:19 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:18:27.646 05:13:19 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:18:27.646 05:13:19 skip_rpc -- scripts/common.sh@368 -- # return 0 01:18:27.646 05:13:19 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:18:27.646 05:13:19 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:18:27.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:18:27.646 --rc genhtml_branch_coverage=1 01:18:27.646 --rc genhtml_function_coverage=1 01:18:27.646 --rc genhtml_legend=1 01:18:27.646 --rc geninfo_all_blocks=1 01:18:27.646 --rc geninfo_unexecuted_blocks=1 01:18:27.646 01:18:27.646 ' 01:18:27.646 05:13:19 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:18:27.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:18:27.646 --rc genhtml_branch_coverage=1 01:18:27.646 --rc genhtml_function_coverage=1 01:18:27.646 --rc genhtml_legend=1 01:18:27.646 --rc geninfo_all_blocks=1 01:18:27.646 --rc geninfo_unexecuted_blocks=1 01:18:27.646 01:18:27.646 ' 01:18:27.646 05:13:19 skip_rpc -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 01:18:27.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:18:27.646 --rc genhtml_branch_coverage=1 01:18:27.646 --rc genhtml_function_coverage=1 01:18:27.646 --rc genhtml_legend=1 01:18:27.646 --rc geninfo_all_blocks=1 01:18:27.646 --rc geninfo_unexecuted_blocks=1 01:18:27.646 01:18:27.646 ' 01:18:27.646 05:13:19 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:18:27.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:18:27.646 --rc genhtml_branch_coverage=1 01:18:27.646 --rc genhtml_function_coverage=1 01:18:27.646 --rc genhtml_legend=1 01:18:27.646 --rc geninfo_all_blocks=1 01:18:27.646 --rc geninfo_unexecuted_blocks=1 01:18:27.646 01:18:27.646 ' 01:18:27.646 05:13:19 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 01:18:27.646 05:13:19 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 01:18:27.646 05:13:19 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 01:18:27.646 05:13:19 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:18:27.646 05:13:19 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 01:18:27.646 05:13:19 skip_rpc -- common/autotest_common.sh@10 -- # set +x 01:18:27.646 ************************************ 01:18:27.646 START TEST skip_rpc 01:18:27.646 ************************************ 01:18:27.646 05:13:19 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 01:18:27.646 05:13:19 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=56953 01:18:27.646 05:13:19 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 01:18:27.646 05:13:19 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 01:18:27.646 05:13:19 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 01:18:27.905 [2024-12-09 05:13:19.243734] Starting SPDK v25.01-pre 
git sha1 66902d69a / DPDK 24.03.0 initialization... 01:18:27.905 [2024-12-09 05:13:19.243934] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56953 ] 01:18:27.905 [2024-12-09 05:13:19.424473] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:18:28.164 [2024-12-09 05:13:19.544124] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:18:33.431 05:13:24 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 01:18:33.431 05:13:24 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 01:18:33.431 05:13:24 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 01:18:33.431 05:13:24 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 01:18:33.431 05:13:24 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:18:33.432 05:13:24 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 01:18:33.432 05:13:24 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:18:33.432 05:13:24 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 01:18:33.432 05:13:24 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:18:33.432 05:13:24 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 01:18:33.432 05:13:24 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 01:18:33.432 05:13:24 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 01:18:33.432 05:13:24 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:18:33.432 05:13:24 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:18:33.432 05:13:24 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 01:18:33.432 05:13:24 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 01:18:33.432 05:13:24 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 56953 01:18:33.432 05:13:24 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 56953 ']' 01:18:33.432 05:13:24 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 56953 01:18:33.432 05:13:24 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 01:18:33.432 05:13:24 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:18:33.432 05:13:24 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56953 01:18:33.432 killing process with pid 56953 01:18:33.432 05:13:24 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:18:33.432 05:13:24 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:18:33.432 05:13:24 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 56953' 01:18:33.432 05:13:24 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 56953 01:18:33.432 05:13:24 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 56953 01:18:34.807 01:18:34.807 real 0m7.141s 01:18:34.807 user 0m6.576s 01:18:34.807 sys 0m0.464s 01:18:34.807 05:13:26 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 01:18:34.807 05:13:26 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 01:18:34.807 ************************************ 01:18:34.807 END TEST skip_rpc 01:18:34.807 ************************************ 01:18:34.807 05:13:26 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 01:18:34.807 05:13:26 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:18:34.807 05:13:26 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 01:18:34.807 05:13:26 skip_rpc -- common/autotest_common.sh@10 -- # set +x 01:18:34.807 
************************************ 01:18:34.807 START TEST skip_rpc_with_json 01:18:34.807 ************************************ 01:18:34.807 05:13:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 01:18:34.807 05:13:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 01:18:34.807 05:13:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57057 01:18:34.807 05:13:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 01:18:34.807 05:13:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 01:18:34.807 05:13:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57057 01:18:34.807 05:13:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 57057 ']' 01:18:34.807 05:13:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:18:34.807 05:13:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 01:18:34.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:18:34.807 05:13:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:18:34.807 05:13:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 01:18:34.807 05:13:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 01:18:35.065 [2024-12-09 05:13:26.438239] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
01:18:35.065 [2024-12-09 05:13:26.438442] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57057 ] 01:18:35.065 [2024-12-09 05:13:26.619242] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:18:35.324 [2024-12-09 05:13:26.746615] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:18:36.261 05:13:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:18:36.261 05:13:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 01:18:36.261 05:13:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 01:18:36.261 05:13:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 01:18:36.261 05:13:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 01:18:36.261 [2024-12-09 05:13:27.563239] nvmf_rpc.c:2706:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 01:18:36.261 request: 01:18:36.261 { 01:18:36.261 "trtype": "tcp", 01:18:36.261 "method": "nvmf_get_transports", 01:18:36.261 "req_id": 1 01:18:36.261 } 01:18:36.261 Got JSON-RPC error response 01:18:36.262 response: 01:18:36.262 { 01:18:36.262 "code": -19, 01:18:36.262 "message": "No such device" 01:18:36.262 } 01:18:36.262 05:13:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 01:18:36.262 05:13:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 01:18:36.262 05:13:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 01:18:36.262 05:13:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 01:18:36.262 [2024-12-09 05:13:27.575356] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
01:18:36.262 05:13:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:18:36.262 05:13:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 01:18:36.262 05:13:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 01:18:36.262 05:13:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 01:18:36.262 05:13:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:18:36.262 05:13:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 01:18:36.262 { 01:18:36.262 "subsystems": [ 01:18:36.262 { 01:18:36.262 "subsystem": "fsdev", 01:18:36.262 "config": [ 01:18:36.262 { 01:18:36.262 "method": "fsdev_set_opts", 01:18:36.262 "params": { 01:18:36.262 "fsdev_io_pool_size": 65535, 01:18:36.262 "fsdev_io_cache_size": 256 01:18:36.262 } 01:18:36.262 } 01:18:36.262 ] 01:18:36.262 }, 01:18:36.262 { 01:18:36.262 "subsystem": "keyring", 01:18:36.262 "config": [] 01:18:36.262 }, 01:18:36.262 { 01:18:36.262 "subsystem": "iobuf", 01:18:36.262 "config": [ 01:18:36.262 { 01:18:36.262 "method": "iobuf_set_options", 01:18:36.262 "params": { 01:18:36.262 "small_pool_count": 8192, 01:18:36.262 "large_pool_count": 1024, 01:18:36.262 "small_bufsize": 8192, 01:18:36.262 "large_bufsize": 135168, 01:18:36.262 "enable_numa": false 01:18:36.262 } 01:18:36.262 } 01:18:36.262 ] 01:18:36.262 }, 01:18:36.262 { 01:18:36.262 "subsystem": "sock", 01:18:36.262 "config": [ 01:18:36.262 { 01:18:36.262 "method": "sock_set_default_impl", 01:18:36.262 "params": { 01:18:36.262 "impl_name": "posix" 01:18:36.262 } 01:18:36.262 }, 01:18:36.262 { 01:18:36.262 "method": "sock_impl_set_options", 01:18:36.262 "params": { 01:18:36.262 "impl_name": "ssl", 01:18:36.262 "recv_buf_size": 4096, 01:18:36.262 "send_buf_size": 4096, 01:18:36.262 "enable_recv_pipe": true, 01:18:36.262 "enable_quickack": false, 01:18:36.262 
"enable_placement_id": 0, 01:18:36.262 "enable_zerocopy_send_server": true, 01:18:36.262 "enable_zerocopy_send_client": false, 01:18:36.262 "zerocopy_threshold": 0, 01:18:36.262 "tls_version": 0, 01:18:36.262 "enable_ktls": false 01:18:36.262 } 01:18:36.262 }, 01:18:36.262 { 01:18:36.262 "method": "sock_impl_set_options", 01:18:36.262 "params": { 01:18:36.262 "impl_name": "posix", 01:18:36.262 "recv_buf_size": 2097152, 01:18:36.262 "send_buf_size": 2097152, 01:18:36.262 "enable_recv_pipe": true, 01:18:36.262 "enable_quickack": false, 01:18:36.262 "enable_placement_id": 0, 01:18:36.262 "enable_zerocopy_send_server": true, 01:18:36.262 "enable_zerocopy_send_client": false, 01:18:36.262 "zerocopy_threshold": 0, 01:18:36.262 "tls_version": 0, 01:18:36.262 "enable_ktls": false 01:18:36.262 } 01:18:36.262 } 01:18:36.262 ] 01:18:36.262 }, 01:18:36.262 { 01:18:36.262 "subsystem": "vmd", 01:18:36.262 "config": [] 01:18:36.262 }, 01:18:36.262 { 01:18:36.262 "subsystem": "accel", 01:18:36.262 "config": [ 01:18:36.262 { 01:18:36.262 "method": "accel_set_options", 01:18:36.262 "params": { 01:18:36.262 "small_cache_size": 128, 01:18:36.262 "large_cache_size": 16, 01:18:36.262 "task_count": 2048, 01:18:36.262 "sequence_count": 2048, 01:18:36.262 "buf_count": 2048 01:18:36.262 } 01:18:36.262 } 01:18:36.262 ] 01:18:36.262 }, 01:18:36.262 { 01:18:36.262 "subsystem": "bdev", 01:18:36.262 "config": [ 01:18:36.262 { 01:18:36.262 "method": "bdev_set_options", 01:18:36.262 "params": { 01:18:36.262 "bdev_io_pool_size": 65535, 01:18:36.262 "bdev_io_cache_size": 256, 01:18:36.262 "bdev_auto_examine": true, 01:18:36.262 "iobuf_small_cache_size": 128, 01:18:36.262 "iobuf_large_cache_size": 16 01:18:36.262 } 01:18:36.262 }, 01:18:36.262 { 01:18:36.262 "method": "bdev_raid_set_options", 01:18:36.262 "params": { 01:18:36.262 "process_window_size_kb": 1024, 01:18:36.262 "process_max_bandwidth_mb_sec": 0 01:18:36.262 } 01:18:36.262 }, 01:18:36.262 { 01:18:36.262 "method": "bdev_iscsi_set_options", 
01:18:36.262 "params": { 01:18:36.262 "timeout_sec": 30 01:18:36.262 } 01:18:36.262 }, 01:18:36.262 { 01:18:36.262 "method": "bdev_nvme_set_options", 01:18:36.262 "params": { 01:18:36.262 "action_on_timeout": "none", 01:18:36.262 "timeout_us": 0, 01:18:36.262 "timeout_admin_us": 0, 01:18:36.262 "keep_alive_timeout_ms": 10000, 01:18:36.262 "arbitration_burst": 0, 01:18:36.262 "low_priority_weight": 0, 01:18:36.262 "medium_priority_weight": 0, 01:18:36.262 "high_priority_weight": 0, 01:18:36.262 "nvme_adminq_poll_period_us": 10000, 01:18:36.262 "nvme_ioq_poll_period_us": 0, 01:18:36.262 "io_queue_requests": 0, 01:18:36.262 "delay_cmd_submit": true, 01:18:36.262 "transport_retry_count": 4, 01:18:36.262 "bdev_retry_count": 3, 01:18:36.262 "transport_ack_timeout": 0, 01:18:36.262 "ctrlr_loss_timeout_sec": 0, 01:18:36.262 "reconnect_delay_sec": 0, 01:18:36.262 "fast_io_fail_timeout_sec": 0, 01:18:36.262 "disable_auto_failback": false, 01:18:36.262 "generate_uuids": false, 01:18:36.262 "transport_tos": 0, 01:18:36.262 "nvme_error_stat": false, 01:18:36.262 "rdma_srq_size": 0, 01:18:36.262 "io_path_stat": false, 01:18:36.262 "allow_accel_sequence": false, 01:18:36.262 "rdma_max_cq_size": 0, 01:18:36.262 "rdma_cm_event_timeout_ms": 0, 01:18:36.262 "dhchap_digests": [ 01:18:36.262 "sha256", 01:18:36.262 "sha384", 01:18:36.262 "sha512" 01:18:36.262 ], 01:18:36.262 "dhchap_dhgroups": [ 01:18:36.262 "null", 01:18:36.262 "ffdhe2048", 01:18:36.262 "ffdhe3072", 01:18:36.262 "ffdhe4096", 01:18:36.262 "ffdhe6144", 01:18:36.262 "ffdhe8192" 01:18:36.262 ] 01:18:36.262 } 01:18:36.262 }, 01:18:36.262 { 01:18:36.262 "method": "bdev_nvme_set_hotplug", 01:18:36.262 "params": { 01:18:36.262 "period_us": 100000, 01:18:36.262 "enable": false 01:18:36.262 } 01:18:36.262 }, 01:18:36.262 { 01:18:36.262 "method": "bdev_wait_for_examine" 01:18:36.262 } 01:18:36.262 ] 01:18:36.262 }, 01:18:36.262 { 01:18:36.262 "subsystem": "scsi", 01:18:36.262 "config": null 01:18:36.262 }, 01:18:36.262 { 
01:18:36.262 "subsystem": "scheduler", 01:18:36.262 "config": [ 01:18:36.262 { 01:18:36.262 "method": "framework_set_scheduler", 01:18:36.262 "params": { 01:18:36.262 "name": "static" 01:18:36.262 } 01:18:36.262 } 01:18:36.262 ] 01:18:36.262 }, 01:18:36.262 { 01:18:36.262 "subsystem": "vhost_scsi", 01:18:36.262 "config": [] 01:18:36.262 }, 01:18:36.262 { 01:18:36.262 "subsystem": "vhost_blk", 01:18:36.262 "config": [] 01:18:36.262 }, 01:18:36.262 { 01:18:36.262 "subsystem": "ublk", 01:18:36.262 "config": [] 01:18:36.262 }, 01:18:36.262 { 01:18:36.262 "subsystem": "nbd", 01:18:36.262 "config": [] 01:18:36.262 }, 01:18:36.262 { 01:18:36.262 "subsystem": "nvmf", 01:18:36.262 "config": [ 01:18:36.262 { 01:18:36.262 "method": "nvmf_set_config", 01:18:36.262 "params": { 01:18:36.262 "discovery_filter": "match_any", 01:18:36.262 "admin_cmd_passthru": { 01:18:36.262 "identify_ctrlr": false 01:18:36.262 }, 01:18:36.262 "dhchap_digests": [ 01:18:36.262 "sha256", 01:18:36.262 "sha384", 01:18:36.262 "sha512" 01:18:36.262 ], 01:18:36.262 "dhchap_dhgroups": [ 01:18:36.262 "null", 01:18:36.262 "ffdhe2048", 01:18:36.262 "ffdhe3072", 01:18:36.262 "ffdhe4096", 01:18:36.262 "ffdhe6144", 01:18:36.262 "ffdhe8192" 01:18:36.262 ] 01:18:36.262 } 01:18:36.262 }, 01:18:36.262 { 01:18:36.262 "method": "nvmf_set_max_subsystems", 01:18:36.262 "params": { 01:18:36.262 "max_subsystems": 1024 01:18:36.262 } 01:18:36.262 }, 01:18:36.262 { 01:18:36.262 "method": "nvmf_set_crdt", 01:18:36.262 "params": { 01:18:36.262 "crdt1": 0, 01:18:36.262 "crdt2": 0, 01:18:36.262 "crdt3": 0 01:18:36.262 } 01:18:36.262 }, 01:18:36.262 { 01:18:36.262 "method": "nvmf_create_transport", 01:18:36.262 "params": { 01:18:36.262 "trtype": "TCP", 01:18:36.262 "max_queue_depth": 128, 01:18:36.262 "max_io_qpairs_per_ctrlr": 127, 01:18:36.262 "in_capsule_data_size": 4096, 01:18:36.262 "max_io_size": 131072, 01:18:36.262 "io_unit_size": 131072, 01:18:36.262 "max_aq_depth": 128, 01:18:36.262 "num_shared_buffers": 511, 
01:18:36.262 "buf_cache_size": 4294967295, 01:18:36.262 "dif_insert_or_strip": false, 01:18:36.262 "zcopy": false, 01:18:36.263 "c2h_success": true, 01:18:36.263 "sock_priority": 0, 01:18:36.263 "abort_timeout_sec": 1, 01:18:36.263 "ack_timeout": 0, 01:18:36.263 "data_wr_pool_size": 0 01:18:36.263 } 01:18:36.263 } 01:18:36.263 ] 01:18:36.263 }, 01:18:36.263 { 01:18:36.263 "subsystem": "iscsi", 01:18:36.263 "config": [ 01:18:36.263 { 01:18:36.263 "method": "iscsi_set_options", 01:18:36.263 "params": { 01:18:36.263 "node_base": "iqn.2016-06.io.spdk", 01:18:36.263 "max_sessions": 128, 01:18:36.263 "max_connections_per_session": 2, 01:18:36.263 "max_queue_depth": 64, 01:18:36.263 "default_time2wait": 2, 01:18:36.263 "default_time2retain": 20, 01:18:36.263 "first_burst_length": 8192, 01:18:36.263 "immediate_data": true, 01:18:36.263 "allow_duplicated_isid": false, 01:18:36.263 "error_recovery_level": 0, 01:18:36.263 "nop_timeout": 60, 01:18:36.263 "nop_in_interval": 30, 01:18:36.263 "disable_chap": false, 01:18:36.263 "require_chap": false, 01:18:36.263 "mutual_chap": false, 01:18:36.263 "chap_group": 0, 01:18:36.263 "max_large_datain_per_connection": 64, 01:18:36.263 "max_r2t_per_connection": 4, 01:18:36.263 "pdu_pool_size": 36864, 01:18:36.263 "immediate_data_pool_size": 16384, 01:18:36.263 "data_out_pool_size": 2048 01:18:36.263 } 01:18:36.263 } 01:18:36.263 ] 01:18:36.263 } 01:18:36.263 ] 01:18:36.263 } 01:18:36.263 05:13:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 01:18:36.263 05:13:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57057 01:18:36.263 05:13:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57057 ']' 01:18:36.263 05:13:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57057 01:18:36.263 05:13:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 01:18:36.263 05:13:27 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:18:36.263 05:13:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57057 01:18:36.263 05:13:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:18:36.263 killing process with pid 57057 01:18:36.263 05:13:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:18:36.263 05:13:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57057' 01:18:36.263 05:13:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57057 01:18:36.263 05:13:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57057 01:18:38.795 05:13:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57108 01:18:38.795 05:13:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 01:18:38.795 05:13:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 01:18:44.060 05:13:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57108 01:18:44.060 05:13:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57108 ']' 01:18:44.060 05:13:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57108 01:18:44.060 05:13:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 01:18:44.060 05:13:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:18:44.060 05:13:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57108 01:18:44.060 killing process with pid 57108 01:18:44.060 05:13:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:18:44.060 05:13:34 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:18:44.060 05:13:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57108' 01:18:44.060 05:13:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57108 01:18:44.060 05:13:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57108 01:18:45.965 05:13:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 01:18:45.965 05:13:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 01:18:45.965 ************************************ 01:18:45.965 END TEST skip_rpc_with_json 01:18:45.965 ************************************ 01:18:45.965 01:18:45.965 real 0m10.800s 01:18:45.965 user 0m10.137s 01:18:45.965 sys 0m1.023s 01:18:45.965 05:13:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 01:18:45.966 05:13:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 01:18:45.966 05:13:37 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 01:18:45.966 05:13:37 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:18:45.966 05:13:37 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 01:18:45.966 05:13:37 skip_rpc -- common/autotest_common.sh@10 -- # set +x 01:18:45.966 ************************************ 01:18:45.966 START TEST skip_rpc_with_delay 01:18:45.966 ************************************ 01:18:45.966 05:13:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 01:18:45.966 05:13:37 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 01:18:45.966 05:13:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 01:18:45.966 
05:13:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 01:18:45.966 05:13:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 01:18:45.966 05:13:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:18:45.966 05:13:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 01:18:45.966 05:13:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:18:45.966 05:13:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 01:18:45.966 05:13:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:18:45.966 05:13:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 01:18:45.966 05:13:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 01:18:45.966 05:13:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 01:18:45.966 [2024-12-09 05:13:37.293730] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
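The *ERROR* line above is the expected outcome: the skip_rpc_with_delay test deliberately launches spdk_tgt with `--no-rpc-server --wait-for-rpc` and passes only if that combination is rejected, with the NOT wrapper traced above inverting the exit status. A rough Python sketch of that expect-failure pattern (the `expect_failure` name is hypothetical, not the real autotest_common.sh helper):

```python
import subprocess
import sys

def expect_failure(cmd: list[str]) -> bool:
    """Succeed only when the wrapped command exits non-zero -- a sketch
    of the NOT-wrapper pattern these test scripts use (hypothetical
    name, not the actual autotest_common.sh function)."""
    return subprocess.run(cmd).returncode != 0

# A command that exits 1 makes the wrapper "pass"; a clean exit fails it.
failing = [sys.executable, "-c", "raise SystemExit(1)"]
passing = [sys.executable, "-c", "pass"]
print(expect_failure(failing), expect_failure(passing))  # True False
```

The shell version additionally records the failing exit status in `es`, which the subsequent `(( es > 128 ))` checks classify, as the trace on the next lines shows.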
01:18:45.966 05:13:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 01:18:45.966 05:13:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:18:45.966 05:13:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:18:45.966 05:13:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:18:45.966 01:18:45.966 real 0m0.205s 01:18:45.966 user 0m0.097s 01:18:45.966 sys 0m0.106s 01:18:45.966 05:13:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 01:18:45.966 ************************************ 01:18:45.966 END TEST skip_rpc_with_delay 01:18:45.966 ************************************ 01:18:45.966 05:13:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 01:18:45.966 05:13:37 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 01:18:45.966 05:13:37 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 01:18:45.966 05:13:37 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 01:18:45.966 05:13:37 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:18:45.966 05:13:37 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 01:18:45.966 05:13:37 skip_rpc -- common/autotest_common.sh@10 -- # set +x 01:18:45.966 ************************************ 01:18:45.966 START TEST exit_on_failed_rpc_init 01:18:45.966 ************************************ 01:18:45.966 05:13:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 01:18:45.966 05:13:37 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57236 01:18:45.966 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
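The "Waiting for process to start up..." message is printed by the `waitforlisten` helper, which retries until the target's RPC socket is usable (the trace shows `local max_retries=100`). A simplified Python sketch of such a polling loop, under the assumption that existence of the socket path is the readiness signal (the real helper also probes the RPC endpoint itself):

```python
import os
import tempfile
import time

def wait_for_listen(sock_path: str, max_retries: int = 100, delay: float = 0.1) -> bool:
    """Poll until sock_path exists, a simplified take on waitforlisten's
    retry loop (sketch; the real helper also checks the RPC endpoint)."""
    for _ in range(max_retries):
        if os.path.exists(sock_path):
            return True
        time.sleep(delay)
    return False

# Demo with a plain temp file standing in for the RPC socket path.
fd, fake_sock = tempfile.mkstemp()
os.close(fd)
print(wait_for_listen(fake_sock, max_retries=3, delay=0.01))  # True
os.unlink(fake_sock)
```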
01:18:45.966 05:13:37 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57236 01:18:45.966 05:13:37 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 01:18:45.966 05:13:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 57236 ']' 01:18:45.966 05:13:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:18:45.966 05:13:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 01:18:45.966 05:13:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:18:45.966 05:13:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 01:18:45.966 05:13:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 01:18:45.966 [2024-12-09 05:13:37.552404] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
01:18:45.966 [2024-12-09 05:13:37.552597] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57236 ] 01:18:46.225 [2024-12-09 05:13:37.732595] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:18:46.484 [2024-12-09 05:13:37.864228] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:18:47.422 05:13:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:18:47.422 05:13:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 01:18:47.422 05:13:38 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 01:18:47.422 05:13:38 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 01:18:47.422 05:13:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 01:18:47.422 05:13:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 01:18:47.422 05:13:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 01:18:47.422 05:13:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:18:47.422 05:13:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 01:18:47.422 05:13:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:18:47.422 05:13:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 01:18:47.422 05:13:38 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:18:47.422 05:13:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 01:18:47.422 05:13:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 01:18:47.422 05:13:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 01:18:47.422 [2024-12-09 05:13:38.827320] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:18:47.422 [2024-12-09 05:13:38.827503] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57259 ] 01:18:47.422 [2024-12-09 05:13:38.999207] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:18:47.682 [2024-12-09 05:13:39.158373] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:18:47.682 [2024-12-09 05:13:39.158720] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
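The "in use. Specify another." error occurs because the first spdk_tgt (pid 57236) still owns /var/tmp/spdk.sock, so the second instance's bind on the same path fails. The underlying condition is EADDRINUSE on a UNIX-domain socket bind, which this small Python illustration reproduces with a throwaway path rather than SPDK's socket:

```python
import errno
import os
import socket
import tempfile

# Bind one listener to a UNIX-domain socket path, then show that a
# second bind to the same path fails with EADDRINUSE -- the condition
# behind the "socket path ... in use" error in the log above.
path = os.path.join(tempfile.mkdtemp(), "demo.sock")

first = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
first.bind(path)
first.listen(1)

in_use = False
second = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
try:
    second.bind(path)
except OSError as e:
    in_use = e.errno == errno.EADDRINUSE
finally:
    second.close()
    first.close()
    os.unlink(path)

print("in use:", in_use)  # in use: True
```

This is why the test spawns the second target on core mask 0x2 purely to provoke the conflict: the expected failure path (`es=234` collapsed to `es=1` in the trace below) is the behavior under test.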
01:18:47.682 [2024-12-09 05:13:39.158818] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 01:18:47.682 [2024-12-09 05:13:39.159147] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 01:18:47.941 05:13:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 01:18:47.941 05:13:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:18:47.941 05:13:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 01:18:47.941 05:13:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 01:18:47.941 05:13:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 01:18:47.941 05:13:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:18:47.941 05:13:39 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 01:18:47.941 05:13:39 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57236 01:18:47.941 05:13:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 57236 ']' 01:18:47.941 05:13:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 57236 01:18:47.941 05:13:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 01:18:47.941 05:13:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:18:47.941 05:13:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57236 01:18:47.941 killing process with pid 57236 01:18:47.941 05:13:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:18:47.941 05:13:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:18:47.941 05:13:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # 
echo 'killing process with pid 57236' 01:18:47.941 05:13:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 57236 01:18:47.941 05:13:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 57236 01:18:50.559 ************************************ 01:18:50.559 END TEST exit_on_failed_rpc_init 01:18:50.559 ************************************ 01:18:50.559 01:18:50.559 real 0m4.315s 01:18:50.559 user 0m4.672s 01:18:50.559 sys 0m0.748s 01:18:50.559 05:13:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 01:18:50.559 05:13:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 01:18:50.559 05:13:41 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 01:18:50.559 01:18:50.559 real 0m22.864s 01:18:50.559 user 0m21.670s 01:18:50.559 sys 0m2.542s 01:18:50.559 05:13:41 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 01:18:50.559 ************************************ 01:18:50.559 END TEST skip_rpc 01:18:50.559 ************************************ 01:18:50.559 05:13:41 skip_rpc -- common/autotest_common.sh@10 -- # set +x 01:18:50.559 05:13:41 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 01:18:50.559 05:13:41 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:18:50.559 05:13:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 01:18:50.559 05:13:41 -- common/autotest_common.sh@10 -- # set +x 01:18:50.559 ************************************ 01:18:50.559 START TEST rpc_client 01:18:50.559 ************************************ 01:18:50.559 05:13:41 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 01:18:50.559 * Looking for test storage... 
01:18:50.559 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 01:18:50.559 05:13:41 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:18:50.560 05:13:41 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 01:18:50.560 05:13:41 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:18:50.560 05:13:42 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:18:50.560 05:13:42 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:18:50.560 05:13:42 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 01:18:50.560 05:13:42 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 01:18:50.560 05:13:42 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 01:18:50.560 05:13:42 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 01:18:50.560 05:13:42 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 01:18:50.560 05:13:42 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 01:18:50.560 05:13:42 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 01:18:50.560 05:13:42 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 01:18:50.560 05:13:42 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 01:18:50.560 05:13:42 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:18:50.560 05:13:42 rpc_client -- scripts/common.sh@344 -- # case "$op" in 01:18:50.560 05:13:42 rpc_client -- scripts/common.sh@345 -- # : 1 01:18:50.560 05:13:42 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 01:18:50.560 05:13:42 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:18:50.560 05:13:42 rpc_client -- scripts/common.sh@365 -- # decimal 1 01:18:50.560 05:13:42 rpc_client -- scripts/common.sh@353 -- # local d=1 01:18:50.560 05:13:42 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:18:50.560 05:13:42 rpc_client -- scripts/common.sh@355 -- # echo 1 01:18:50.560 05:13:42 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 01:18:50.560 05:13:42 rpc_client -- scripts/common.sh@366 -- # decimal 2 01:18:50.560 05:13:42 rpc_client -- scripts/common.sh@353 -- # local d=2 01:18:50.560 05:13:42 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:18:50.560 05:13:42 rpc_client -- scripts/common.sh@355 -- # echo 2 01:18:50.560 05:13:42 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 01:18:50.560 05:13:42 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:18:50.560 05:13:42 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:18:50.560 05:13:42 rpc_client -- scripts/common.sh@368 -- # return 0 01:18:50.560 05:13:42 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:18:50.560 05:13:42 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:18:50.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:18:50.560 --rc genhtml_branch_coverage=1 01:18:50.560 --rc genhtml_function_coverage=1 01:18:50.560 --rc genhtml_legend=1 01:18:50.560 --rc geninfo_all_blocks=1 01:18:50.560 --rc geninfo_unexecuted_blocks=1 01:18:50.560 01:18:50.560 ' 01:18:50.560 05:13:42 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:18:50.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:18:50.560 --rc genhtml_branch_coverage=1 01:18:50.560 --rc genhtml_function_coverage=1 01:18:50.560 --rc genhtml_legend=1 01:18:50.560 --rc geninfo_all_blocks=1 01:18:50.560 --rc geninfo_unexecuted_blocks=1 01:18:50.560 01:18:50.560 ' 01:18:50.560 05:13:42 rpc_client -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:18:50.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:18:50.560 --rc genhtml_branch_coverage=1 01:18:50.560 --rc genhtml_function_coverage=1 01:18:50.560 --rc genhtml_legend=1 01:18:50.560 --rc geninfo_all_blocks=1 01:18:50.560 --rc geninfo_unexecuted_blocks=1 01:18:50.560 01:18:50.560 ' 01:18:50.560 05:13:42 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:18:50.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:18:50.560 --rc genhtml_branch_coverage=1 01:18:50.560 --rc genhtml_function_coverage=1 01:18:50.560 --rc genhtml_legend=1 01:18:50.560 --rc geninfo_all_blocks=1 01:18:50.560 --rc geninfo_unexecuted_blocks=1 01:18:50.560 01:18:50.560 ' 01:18:50.560 05:13:42 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 01:18:50.560 OK 01:18:50.560 05:13:42 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 01:18:50.560 01:18:50.560 real 0m0.267s 01:18:50.560 user 0m0.161s 01:18:50.560 sys 0m0.113s 01:18:50.560 05:13:42 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 01:18:50.560 05:13:42 rpc_client -- common/autotest_common.sh@10 -- # set +x 01:18:50.560 ************************************ 01:18:50.560 END TEST rpc_client 01:18:50.560 ************************************ 01:18:50.560 05:13:42 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 01:18:50.560 05:13:42 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:18:50.560 05:13:42 -- common/autotest_common.sh@1111 -- # xtrace_disable 01:18:50.560 05:13:42 -- common/autotest_common.sh@10 -- # set +x 01:18:50.560 ************************************ 01:18:50.560 START TEST json_config 01:18:50.560 ************************************ 01:18:50.560 05:13:42 json_config -- common/autotest_common.sh@1129 -- # 
/home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 01:18:50.826 05:13:42 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:18:50.826 05:13:42 json_config -- common/autotest_common.sh@1693 -- # lcov --version 01:18:50.826 05:13:42 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:18:50.827 05:13:42 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:18:50.827 05:13:42 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:18:50.827 05:13:42 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 01:18:50.827 05:13:42 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 01:18:50.827 05:13:42 json_config -- scripts/common.sh@336 -- # IFS=.-: 01:18:50.827 05:13:42 json_config -- scripts/common.sh@336 -- # read -ra ver1 01:18:50.827 05:13:42 json_config -- scripts/common.sh@337 -- # IFS=.-: 01:18:50.827 05:13:42 json_config -- scripts/common.sh@337 -- # read -ra ver2 01:18:50.827 05:13:42 json_config -- scripts/common.sh@338 -- # local 'op=<' 01:18:50.827 05:13:42 json_config -- scripts/common.sh@340 -- # ver1_l=2 01:18:50.827 05:13:42 json_config -- scripts/common.sh@341 -- # ver2_l=1 01:18:50.827 05:13:42 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:18:50.827 05:13:42 json_config -- scripts/common.sh@344 -- # case "$op" in 01:18:50.827 05:13:42 json_config -- scripts/common.sh@345 -- # : 1 01:18:50.827 05:13:42 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 01:18:50.827 05:13:42 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:18:50.827 05:13:42 json_config -- scripts/common.sh@365 -- # decimal 1 01:18:50.827 05:13:42 json_config -- scripts/common.sh@353 -- # local d=1 01:18:50.827 05:13:42 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:18:50.827 05:13:42 json_config -- scripts/common.sh@355 -- # echo 1 01:18:50.827 05:13:42 json_config -- scripts/common.sh@365 -- # ver1[v]=1 01:18:50.827 05:13:42 json_config -- scripts/common.sh@366 -- # decimal 2 01:18:50.827 05:13:42 json_config -- scripts/common.sh@353 -- # local d=2 01:18:50.827 05:13:42 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:18:50.827 05:13:42 json_config -- scripts/common.sh@355 -- # echo 2 01:18:50.827 05:13:42 json_config -- scripts/common.sh@366 -- # ver2[v]=2 01:18:50.827 05:13:42 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:18:50.827 05:13:42 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:18:50.827 05:13:42 json_config -- scripts/common.sh@368 -- # return 0 01:18:50.827 05:13:42 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:18:50.827 05:13:42 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:18:50.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:18:50.827 --rc genhtml_branch_coverage=1 01:18:50.827 --rc genhtml_function_coverage=1 01:18:50.827 --rc genhtml_legend=1 01:18:50.827 --rc geninfo_all_blocks=1 01:18:50.827 --rc geninfo_unexecuted_blocks=1 01:18:50.827 01:18:50.827 ' 01:18:50.827 05:13:42 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:18:50.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:18:50.827 --rc genhtml_branch_coverage=1 01:18:50.827 --rc genhtml_function_coverage=1 01:18:50.827 --rc genhtml_legend=1 01:18:50.827 --rc geninfo_all_blocks=1 01:18:50.827 --rc geninfo_unexecuted_blocks=1 01:18:50.827 01:18:50.827 ' 01:18:50.827 05:13:42 json_config -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:18:50.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:18:50.827 --rc genhtml_branch_coverage=1 01:18:50.827 --rc genhtml_function_coverage=1 01:18:50.827 --rc genhtml_legend=1 01:18:50.827 --rc geninfo_all_blocks=1 01:18:50.827 --rc geninfo_unexecuted_blocks=1 01:18:50.827 01:18:50.827 ' 01:18:50.827 05:13:42 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:18:50.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:18:50.827 --rc genhtml_branch_coverage=1 01:18:50.827 --rc genhtml_function_coverage=1 01:18:50.827 --rc genhtml_legend=1 01:18:50.827 --rc geninfo_all_blocks=1 01:18:50.827 --rc geninfo_unexecuted_blocks=1 01:18:50.827 01:18:50.827 ' 01:18:50.827 05:13:42 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:18:50.827 05:13:42 json_config -- nvmf/common.sh@7 -- # uname -s 01:18:50.827 05:13:42 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:18:50.827 05:13:42 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:18:50.827 05:13:42 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:18:50.827 05:13:42 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:18:50.827 05:13:42 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:18:50.827 05:13:42 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:18:50.827 05:13:42 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:18:50.827 05:13:42 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:18:50.827 05:13:42 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:18:50.827 05:13:42 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:18:50.827 05:13:42 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:806ada1f-4f7d-4439-bb20-849f8d3247b8 01:18:50.827 05:13:42 json_config -- nvmf/common.sh@18 -- # 
NVME_HOSTID=806ada1f-4f7d-4439-bb20-849f8d3247b8 01:18:50.827 05:13:42 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:18:50.827 05:13:42 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:18:50.827 05:13:42 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 01:18:50.827 05:13:42 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:18:50.827 05:13:42 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:18:50.827 05:13:42 json_config -- scripts/common.sh@15 -- # shopt -s extglob 01:18:50.827 05:13:42 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:18:50.827 05:13:42 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:18:50.827 05:13:42 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:18:50.827 05:13:42 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:18:50.827 05:13:42 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:18:50.827 05:13:42 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:18:50.827 05:13:42 json_config -- paths/export.sh@5 -- # export PATH 01:18:50.827 05:13:42 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:18:50.827 05:13:42 json_config -- nvmf/common.sh@51 -- # : 0 01:18:50.827 05:13:42 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:18:50.827 05:13:42 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:18:50.827 05:13:42 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:18:50.827 05:13:42 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:18:50.827 05:13:42 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:18:50.827 05:13:42 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:18:50.827 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:18:50.827 05:13:42 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:18:50.827 05:13:42 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:18:50.827 05:13:42 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 01:18:50.827 WARNING: No tests are enabled so not running JSON configuration tests 01:18:50.827 05:13:42 json_config -- 
json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 01:18:50.827 05:13:42 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 01:18:50.827 05:13:42 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 01:18:50.827 05:13:42 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 01:18:50.827 05:13:42 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 01:18:50.827 05:13:42 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 01:18:50.827 05:13:42 json_config -- json_config/json_config.sh@28 -- # exit 0 01:18:50.827 01:18:50.827 real 0m0.194s 01:18:50.827 user 0m0.122s 01:18:50.827 sys 0m0.073s 01:18:50.827 05:13:42 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 01:18:50.827 05:13:42 json_config -- common/autotest_common.sh@10 -- # set +x 01:18:50.827 ************************************ 01:18:50.827 END TEST json_config 01:18:50.827 ************************************ 01:18:50.827 05:13:42 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 01:18:50.827 05:13:42 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:18:50.827 05:13:42 -- common/autotest_common.sh@1111 -- # xtrace_disable 01:18:50.827 05:13:42 -- common/autotest_common.sh@10 -- # set +x 01:18:50.827 ************************************ 01:18:50.827 START TEST json_config_extra_key 01:18:50.827 ************************************ 01:18:50.827 05:13:42 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 01:18:51.087 05:13:42 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:18:51.087 05:13:42 json_config_extra_key -- 
common/autotest_common.sh@1693 -- # lcov --version 01:18:51.087 05:13:42 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:18:51.087 05:13:42 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:18:51.087 05:13:42 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:18:51.087 05:13:42 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 01:18:51.087 05:13:42 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 01:18:51.087 05:13:42 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 01:18:51.087 05:13:42 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 01:18:51.087 05:13:42 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 01:18:51.087 05:13:42 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 01:18:51.087 05:13:42 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 01:18:51.087 05:13:42 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 01:18:51.087 05:13:42 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 01:18:51.087 05:13:42 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:18:51.087 05:13:42 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 01:18:51.087 05:13:42 json_config_extra_key -- scripts/common.sh@345 -- # : 1 01:18:51.087 05:13:42 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 01:18:51.087 05:13:42 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:18:51.087 05:13:42 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 01:18:51.087 05:13:42 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 01:18:51.087 05:13:42 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:18:51.087 05:13:42 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 01:18:51.087 05:13:42 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 01:18:51.087 05:13:42 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 01:18:51.087 05:13:42 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 01:18:51.087 05:13:42 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:18:51.087 05:13:42 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 01:18:51.087 05:13:42 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 01:18:51.087 05:13:42 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:18:51.087 05:13:42 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:18:51.087 05:13:42 json_config_extra_key -- scripts/common.sh@368 -- # return 0 01:18:51.087 05:13:42 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:18:51.087 05:13:42 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:18:51.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:18:51.087 --rc genhtml_branch_coverage=1 01:18:51.087 --rc genhtml_function_coverage=1 01:18:51.087 --rc genhtml_legend=1 01:18:51.087 --rc geninfo_all_blocks=1 01:18:51.087 --rc geninfo_unexecuted_blocks=1 01:18:51.087 01:18:51.087 ' 01:18:51.087 05:13:42 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:18:51.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:18:51.087 --rc genhtml_branch_coverage=1 01:18:51.087 --rc genhtml_function_coverage=1 01:18:51.087 --rc 
genhtml_legend=1 01:18:51.087 --rc geninfo_all_blocks=1 01:18:51.087 --rc geninfo_unexecuted_blocks=1 01:18:51.087 01:18:51.087 ' 01:18:51.087 05:13:42 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:18:51.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:18:51.087 --rc genhtml_branch_coverage=1 01:18:51.087 --rc genhtml_function_coverage=1 01:18:51.087 --rc genhtml_legend=1 01:18:51.087 --rc geninfo_all_blocks=1 01:18:51.087 --rc geninfo_unexecuted_blocks=1 01:18:51.087 01:18:51.087 ' 01:18:51.087 05:13:42 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:18:51.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:18:51.087 --rc genhtml_branch_coverage=1 01:18:51.087 --rc genhtml_function_coverage=1 01:18:51.087 --rc genhtml_legend=1 01:18:51.087 --rc geninfo_all_blocks=1 01:18:51.087 --rc geninfo_unexecuted_blocks=1 01:18:51.087 01:18:51.087 ' 01:18:51.087 05:13:42 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:18:51.087 05:13:42 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 01:18:51.087 05:13:42 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:18:51.087 05:13:42 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:18:51.087 05:13:42 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:18:51.087 05:13:42 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:18:51.087 05:13:42 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:18:51.087 05:13:42 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:18:51.087 05:13:42 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:18:51.087 05:13:42 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:18:51.087 05:13:42 json_config_extra_key -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 01:18:51.087 05:13:42 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:18:51.087 05:13:42 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:806ada1f-4f7d-4439-bb20-849f8d3247b8 01:18:51.087 05:13:42 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=806ada1f-4f7d-4439-bb20-849f8d3247b8 01:18:51.087 05:13:42 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:18:51.087 05:13:42 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:18:51.087 05:13:42 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 01:18:51.087 05:13:42 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:18:51.087 05:13:42 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:18:51.087 05:13:42 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 01:18:51.087 05:13:42 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:18:51.087 05:13:42 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:18:51.087 05:13:42 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:18:51.087 05:13:42 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:18:51.087 05:13:42 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:18:51.087 05:13:42 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:18:51.087 05:13:42 json_config_extra_key -- paths/export.sh@5 -- # export PATH 01:18:51.087 05:13:42 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:18:51.087 05:13:42 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 01:18:51.087 05:13:42 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:18:51.087 05:13:42 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:18:51.087 05:13:42 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:18:51.087 05:13:42 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:18:51.087 05:13:42 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
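The `NVMF_APP+=(...)` lines traced above build the target's command line in a bash array rather than a string, so each flag stays a separate word through quoting. A minimal sketch of the pattern (the binary name, flag values, and `EXTRA` group here are illustrative, not SPDK's actual defaults):

```shell
# Accumulate command-line flags in a bash array; arrays keep each flag
# a separate word even when values contain spaces.
APP=(my_app)                    # hypothetical binary name, not SPDK's
SHM_ID=0
APP+=(-i "$SHM_ID" -e 0xFFFF)   # append flag/value pairs
EXTRA=()                        # an optional, possibly-empty flag group
APP+=("${EXTRA[@]}")            # expands to nothing when EXTRA is empty

printf '%s\n' "${APP[@]}"       # each element prints as its own word
```

Expanding with `"${APP[@]}"` is what makes the empty `EXTRA` append a no-op instead of an empty-string argument.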
01:18:51.087 05:13:42 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:18:51.087 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:18:51.087 05:13:42 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:18:51.087 05:13:42 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:18:51.087 05:13:42 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 01:18:51.087 05:13:42 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 01:18:51.087 05:13:42 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 01:18:51.087 INFO: launching applications... 01:18:51.087 05:13:42 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 01:18:51.087 05:13:42 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 01:18:51.087 05:13:42 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 01:18:51.087 05:13:42 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 01:18:51.087 05:13:42 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 01:18:51.087 05:13:42 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 01:18:51.087 05:13:42 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 01:18:51.087 05:13:42 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 01:18:51.087 05:13:42 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
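The `[: : integer expression expected` message recorded above comes from `'[' '' -eq 1 ']'`: nvmf/common.sh line 33 runs a numeric test on a variable that expanded to the empty string. A small sketch of the failure mode and the usual defensive fix (the `FLAG` variable is a stand-in):

```shell
# Reproduces the logged failure mode: [ "" -eq 1 ] is not a valid
# integer comparison and prints "integer expression expected".
FLAG=""                         # stand-in for the unset toggle variable

# Defaulting the expansion keeps the operand numeric and silences the error:
if [ "${FLAG:-0}" -eq 1 ]; then
    echo "feature enabled"
else
    echo "feature disabled"
fi
```

With `${FLAG:-0}` the test sees `0` instead of an empty word, so the branch resolves cleanly to the disabled case.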
01:18:51.087 05:13:42 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 01:18:51.087 05:13:42 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 01:18:51.088 05:13:42 json_config_extra_key -- json_config/common.sh@10 -- # shift 01:18:51.088 05:13:42 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 01:18:51.088 05:13:42 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 01:18:51.088 05:13:42 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 01:18:51.088 05:13:42 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 01:18:51.088 05:13:42 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 01:18:51.088 05:13:42 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57464 01:18:51.088 05:13:42 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 01:18:51.088 Waiting for target to run... 01:18:51.088 05:13:42 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 01:18:51.088 05:13:42 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57464 /var/tmp/spdk_tgt.sock 01:18:51.088 05:13:42 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 57464 ']' 01:18:51.088 05:13:42 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 01:18:51.088 05:13:42 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 01:18:51.088 05:13:42 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 
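`waitforlisten`, invoked above with `max_retries=100`, blocks until the freshly launched `spdk_tgt` is reachable on its UNIX-domain socket. A simplified stand-in for that helper (the real one verifies the RPC endpoint; this sketch only polls for the socket file, and the function name is ours):

```shell
# Simplified stand-in for waitforlisten: poll until a UNIX-domain socket
# appears, bounded by max_retries (the trace above uses max_retries=100).
wait_for_socket() {
    local sock=$1 max_retries=${2:-100} i
    for (( i = 0; i < max_retries; i++ )); do
        [ -S "$sock" ] && return 0    # socket exists: target is listening
        sleep 0.1
    done
    return 1                          # gave up; caller handles the timeout
}
```

Usage would look like `wait_for_socket /var/tmp/spdk_tgt.sock 100 || echo "target never came up"`.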
01:18:51.088 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 01:18:51.088 05:13:42 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 01:18:51.088 05:13:42 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 01:18:51.346 [2024-12-09 05:13:42.729005] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:18:51.346 [2024-12-09 05:13:42.729564] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57464 ] 01:18:51.975 [2024-12-09 05:13:43.234896] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:18:51.975 [2024-12-09 05:13:43.384610] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:18:52.541 05:13:43 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:18:52.541 01:18:52.541 INFO: shutting down applications... 01:18:52.542 05:13:43 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 01:18:52.542 05:13:43 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 01:18:52.542 05:13:43 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
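The "shutting down applications..." step announced above is implemented in json_config/common.sh as a SIGINT followed by a bounded `kill -0` poll, visible in the repeated `(( i < 30 ))` / `sleep 0.5` records. A sketch of that pattern (the function name is ours; `kill -0` sends no signal, it only checks that the pid still exists):

```shell
# SIGINT-then-poll shutdown: ask the target to exit, then wait up to
# 30 * 0.5s for the pid to disappear, as in the traced loop.
shutdown_app() {
    local pid=$1 i
    kill -SIGINT "$pid" 2>/dev/null
    for (( i = 0; i < 30; i++ )); do
        kill -0 "$pid" 2>/dev/null || return 0   # pid gone: clean exit
        sleep 0.5                                # matches the traced delay
    done
    return 1                                     # still alive; caller escalates
}
```

Returning 1 rather than force-killing leaves the escalation policy (SIGKILL, error report) to the caller, which is how the test script's `break`/error paths divide the work.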
01:18:52.542 05:13:43 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 01:18:52.542 05:13:43 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 01:18:52.542 05:13:43 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 01:18:52.542 05:13:43 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57464 ]] 01:18:52.542 05:13:43 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57464 01:18:52.542 05:13:43 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 01:18:52.542 05:13:43 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 01:18:52.542 05:13:43 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57464 01:18:52.542 05:13:43 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 01:18:53.108 05:13:44 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 01:18:53.108 05:13:44 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 01:18:53.108 05:13:44 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57464 01:18:53.108 05:13:44 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 01:18:53.674 05:13:44 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 01:18:53.674 05:13:44 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 01:18:53.674 05:13:44 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57464 01:18:53.674 05:13:44 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 01:18:53.933 05:13:45 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 01:18:53.933 05:13:45 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 01:18:53.933 05:13:45 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57464 01:18:53.933 05:13:45 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 01:18:54.500 05:13:45 json_config_extra_key -- json_config/common.sh@40 -- # 
(( i++ )) 01:18:54.500 05:13:45 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 01:18:54.500 05:13:45 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57464 01:18:54.500 05:13:45 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 01:18:55.067 05:13:46 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 01:18:55.067 05:13:46 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 01:18:55.067 05:13:46 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57464 01:18:55.067 SPDK target shutdown done 01:18:55.067 Success 01:18:55.067 05:13:46 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 01:18:55.067 05:13:46 json_config_extra_key -- json_config/common.sh@43 -- # break 01:18:55.067 05:13:46 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 01:18:55.067 05:13:46 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 01:18:55.067 05:13:46 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 01:18:55.067 01:18:55.067 real 0m4.103s 01:18:55.067 user 0m3.866s 01:18:55.067 sys 0m0.670s 01:18:55.067 ************************************ 01:18:55.067 END TEST json_config_extra_key 01:18:55.067 ************************************ 01:18:55.067 05:13:46 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 01:18:55.067 05:13:46 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 01:18:55.067 05:13:46 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 01:18:55.067 05:13:46 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:18:55.067 05:13:46 -- common/autotest_common.sh@1111 -- # xtrace_disable 01:18:55.067 05:13:46 -- common/autotest_common.sh@10 -- # set +x 01:18:55.067 ************************************ 01:18:55.067 START TEST alias_rpc 01:18:55.067 
************************************ 01:18:55.067 05:13:46 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 01:18:55.067 * Looking for test storage... 01:18:55.067 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 01:18:55.067 05:13:46 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:18:55.067 05:13:46 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:18:55.067 05:13:46 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 01:18:55.325 05:13:46 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:18:55.325 05:13:46 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:18:55.325 05:13:46 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 01:18:55.325 05:13:46 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 01:18:55.325 05:13:46 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 01:18:55.325 05:13:46 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 01:18:55.325 05:13:46 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 01:18:55.325 05:13:46 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 01:18:55.325 05:13:46 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 01:18:55.325 05:13:46 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 01:18:55.325 05:13:46 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 01:18:55.325 05:13:46 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:18:55.325 05:13:46 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 01:18:55.325 05:13:46 alias_rpc -- scripts/common.sh@345 -- # : 1 01:18:55.325 05:13:46 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 01:18:55.325 05:13:46 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:18:55.325 05:13:46 alias_rpc -- scripts/common.sh@365 -- # decimal 1 01:18:55.326 05:13:46 alias_rpc -- scripts/common.sh@353 -- # local d=1 01:18:55.326 05:13:46 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:18:55.326 05:13:46 alias_rpc -- scripts/common.sh@355 -- # echo 1 01:18:55.326 05:13:46 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 01:18:55.326 05:13:46 alias_rpc -- scripts/common.sh@366 -- # decimal 2 01:18:55.326 05:13:46 alias_rpc -- scripts/common.sh@353 -- # local d=2 01:18:55.326 05:13:46 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:18:55.326 05:13:46 alias_rpc -- scripts/common.sh@355 -- # echo 2 01:18:55.326 05:13:46 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 01:18:55.326 05:13:46 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:18:55.326 05:13:46 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:18:55.326 05:13:46 alias_rpc -- scripts/common.sh@368 -- # return 0 01:18:55.326 05:13:46 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:18:55.326 05:13:46 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:18:55.326 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:18:55.326 --rc genhtml_branch_coverage=1 01:18:55.326 --rc genhtml_function_coverage=1 01:18:55.326 --rc genhtml_legend=1 01:18:55.326 --rc geninfo_all_blocks=1 01:18:55.326 --rc geninfo_unexecuted_blocks=1 01:18:55.326 01:18:55.326 ' 01:18:55.326 05:13:46 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:18:55.326 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:18:55.326 --rc genhtml_branch_coverage=1 01:18:55.326 --rc genhtml_function_coverage=1 01:18:55.326 --rc genhtml_legend=1 01:18:55.326 --rc geninfo_all_blocks=1 01:18:55.326 --rc geninfo_unexecuted_blocks=1 01:18:55.326 01:18:55.326 ' 01:18:55.326 05:13:46 alias_rpc -- common/autotest_common.sh@1707 -- 
# export 'LCOV=lcov 01:18:55.326 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:18:55.326 --rc genhtml_branch_coverage=1 01:18:55.326 --rc genhtml_function_coverage=1 01:18:55.326 --rc genhtml_legend=1 01:18:55.326 --rc geninfo_all_blocks=1 01:18:55.326 --rc geninfo_unexecuted_blocks=1 01:18:55.326 01:18:55.326 ' 01:18:55.326 05:13:46 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:18:55.326 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:18:55.326 --rc genhtml_branch_coverage=1 01:18:55.326 --rc genhtml_function_coverage=1 01:18:55.326 --rc genhtml_legend=1 01:18:55.326 --rc geninfo_all_blocks=1 01:18:55.326 --rc geninfo_unexecuted_blocks=1 01:18:55.326 01:18:55.326 ' 01:18:55.326 05:13:46 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 01:18:55.326 05:13:46 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57569 01:18:55.326 05:13:46 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 01:18:55.326 05:13:46 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57569 01:18:55.326 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:18:55.326 05:13:46 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 57569 ']' 01:18:55.326 05:13:46 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:18:55.326 05:13:46 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 01:18:55.326 05:13:46 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:18:55.326 05:13:46 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 01:18:55.326 05:13:46 alias_rpc -- common/autotest_common.sh@10 -- # set +x 01:18:55.326 [2024-12-09 05:13:46.876028] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
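alias_rpc.sh installs `trap 'killprocess $spdk_tgt_pid; exit 1' ERR` above so that any failing command tears down the target before the script exits. A sketch of the mechanism with a stand-in cleanup function (the pid and function body are placeholders; the real trap also runs `exit 1`, omitted here so the demo continues):

```shell
# An ERR trap runs its handler whenever a simple command fails, letting
# a test script clean up spawned daemons on any unexpected error.
set -E                               # propagate the ERR trap into functions
ran=0
cleanup() { echo "tearing down pid $1"; ran=1; }
spdk_tgt_pid=12345                   # placeholder pid, not the log's 57569
trap 'cleanup "$spdk_tgt_pid"' ERR   # real script appends `; exit 1`

false                                # a failing command fires the trap
trap - ERR                           # detach once the demo has run
```

Because the real handler ends in `exit 1`, a mid-test failure both kills the target and propagates a nonzero status to the `run_test` wrapper.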
01:18:55.326 [2024-12-09 05:13:46.876695] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57569 ] 01:18:55.585 [2024-12-09 05:13:47.041438] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:18:55.585 [2024-12-09 05:13:47.171865] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:18:56.520 05:13:48 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:18:56.520 05:13:48 alias_rpc -- common/autotest_common.sh@868 -- # return 0 01:18:56.520 05:13:48 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 01:18:56.779 05:13:48 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57569 01:18:56.779 05:13:48 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 57569 ']' 01:18:56.779 05:13:48 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 57569 01:18:56.779 05:13:48 alias_rpc -- common/autotest_common.sh@959 -- # uname 01:18:56.779 05:13:48 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:18:56.779 05:13:48 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57569 01:18:57.038 05:13:48 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:18:57.038 killing process with pid 57569 01:18:57.038 05:13:48 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:18:57.038 05:13:48 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57569' 01:18:57.038 05:13:48 alias_rpc -- common/autotest_common.sh@973 -- # kill 57569 01:18:57.038 05:13:48 alias_rpc -- common/autotest_common.sh@978 -- # wait 57569 01:18:59.573 01:18:59.573 real 0m4.043s 01:18:59.573 user 0m3.993s 01:18:59.573 sys 0m0.763s 01:18:59.573 ************************************ 01:18:59.573 END TEST 
alias_rpc 01:18:59.573 ************************************ 01:18:59.573 05:13:50 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 01:18:59.573 05:13:50 alias_rpc -- common/autotest_common.sh@10 -- # set +x 01:18:59.573 05:13:50 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 01:18:59.573 05:13:50 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 01:18:59.573 05:13:50 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:18:59.573 05:13:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 01:18:59.573 05:13:50 -- common/autotest_common.sh@10 -- # set +x 01:18:59.573 ************************************ 01:18:59.573 START TEST spdkcli_tcp 01:18:59.573 ************************************ 01:18:59.573 05:13:50 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 01:18:59.573 * Looking for test storage... 01:18:59.573 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 01:18:59.573 05:13:50 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:18:59.573 05:13:50 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 01:18:59.573 05:13:50 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:18:59.573 05:13:50 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:18:59.573 05:13:50 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:18:59.573 05:13:50 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 01:18:59.573 05:13:50 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 01:18:59.573 05:13:50 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 01:18:59.573 05:13:50 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 01:18:59.573 05:13:50 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 01:18:59.573 05:13:50 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 01:18:59.573 05:13:50 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 01:18:59.573 
05:13:50 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 01:18:59.573 05:13:50 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 01:18:59.573 05:13:50 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:18:59.573 05:13:50 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 01:18:59.573 05:13:50 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 01:18:59.573 05:13:50 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 01:18:59.573 05:13:50 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 01:18:59.573 05:13:50 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 01:18:59.573 05:13:50 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 01:18:59.573 05:13:50 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:18:59.573 05:13:50 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 01:18:59.573 05:13:50 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 01:18:59.573 05:13:50 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 01:18:59.573 05:13:50 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 01:18:59.573 05:13:50 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:18:59.573 05:13:50 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 01:18:59.573 05:13:50 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 01:18:59.573 05:13:50 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:18:59.573 05:13:50 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:18:59.573 05:13:50 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 01:18:59.573 05:13:50 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:18:59.573 05:13:50 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:18:59.573 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:18:59.573 --rc genhtml_branch_coverage=1 01:18:59.573 --rc genhtml_function_coverage=1 01:18:59.573 --rc genhtml_legend=1 
01:18:59.573 --rc geninfo_all_blocks=1 01:18:59.573 --rc geninfo_unexecuted_blocks=1 01:18:59.573 01:18:59.573 ' 01:18:59.573 05:13:50 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:18:59.573 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:18:59.573 --rc genhtml_branch_coverage=1 01:18:59.573 --rc genhtml_function_coverage=1 01:18:59.573 --rc genhtml_legend=1 01:18:59.573 --rc geninfo_all_blocks=1 01:18:59.573 --rc geninfo_unexecuted_blocks=1 01:18:59.573 01:18:59.573 ' 01:18:59.573 05:13:50 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:18:59.573 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:18:59.573 --rc genhtml_branch_coverage=1 01:18:59.573 --rc genhtml_function_coverage=1 01:18:59.573 --rc genhtml_legend=1 01:18:59.573 --rc geninfo_all_blocks=1 01:18:59.573 --rc geninfo_unexecuted_blocks=1 01:18:59.573 01:18:59.573 ' 01:18:59.573 05:13:50 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:18:59.573 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:18:59.573 --rc genhtml_branch_coverage=1 01:18:59.573 --rc genhtml_function_coverage=1 01:18:59.573 --rc genhtml_legend=1 01:18:59.573 --rc geninfo_all_blocks=1 01:18:59.573 --rc geninfo_unexecuted_blocks=1 01:18:59.573 01:18:59.573 ' 01:18:59.573 05:13:50 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 01:18:59.573 05:13:50 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 01:18:59.573 05:13:50 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 01:18:59.573 05:13:50 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 01:18:59.573 05:13:50 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 01:18:59.573 05:13:50 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 01:18:59.573 05:13:50 spdkcli_tcp -- 
spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 01:18:59.573 05:13:50 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 01:18:59.573 05:13:50 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 01:18:59.573 05:13:50 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=57676 01:18:59.573 05:13:50 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 57676 01:18:59.573 05:13:50 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 01:18:59.573 05:13:50 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 57676 ']' 01:18:59.573 05:13:50 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:18:59.573 05:13:50 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 01:18:59.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:18:59.573 05:13:50 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:18:59.573 05:13:50 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 01:18:59.573 05:13:50 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 01:18:59.573 [2024-12-09 05:13:50.954842] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
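The `lt 1.15 2` xtrace earlier in this log steps through the `cmp_versions` helper in scripts/common.sh: both versions are split on `.-:` into arrays and compared field by field. A minimal re-sketch of that logic (hypothetical `ver_lt` name, not the exact SPDK implementation):

```shell
#!/usr/bin/env bash
# Sketch of the dotted-version comparison exercised by the cmp_versions trace.
# ver_lt A B returns 0 (true) when A < B, mirroring `lt 1.15 2`.
ver_lt() {
    local -a v1 v2
    IFS=.-: read -ra v1 <<< "$1"
    IFS=.-: read -ra v2 <<< "$2"
    local i len=${#v1[@]}
    (( ${#v2[@]} > len )) && len=${#v2[@]}
    for (( i = 0; i < len; i++ )); do
        # Missing fields compare as 0, so 1.15 vs 2 decides on 1 < 2.
        local a=${v1[i]:-0} b=${v2[i]:-0}
        (( a > b )) && return 1
        (( a < b )) && return 0
    done
    return 1   # equal -> not less-than
}

ver_lt 1.15 2 && echo "1.15 < 2"
```

The trace returns 0 for `lt 1.15 2` for the same reason: the first differing field (1 vs 2) settles the comparison.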
01:18:59.573 [2024-12-09 05:13:50.955032] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57676 ] 01:18:59.573 [2024-12-09 05:13:51.135162] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 01:18:59.831 [2024-12-09 05:13:51.271645] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:18:59.831 [2024-12-09 05:13:51.271654] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:19:00.766 05:13:52 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:19:00.766 05:13:52 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 01:19:00.766 05:13:52 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=57693 01:19:00.766 05:13:52 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 01:19:00.766 05:13:52 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 01:19:01.024 [ 01:19:01.024 "bdev_malloc_delete", 01:19:01.024 "bdev_malloc_create", 01:19:01.024 "bdev_null_resize", 01:19:01.024 "bdev_null_delete", 01:19:01.024 "bdev_null_create", 01:19:01.024 "bdev_nvme_cuse_unregister", 01:19:01.024 "bdev_nvme_cuse_register", 01:19:01.024 "bdev_opal_new_user", 01:19:01.024 "bdev_opal_set_lock_state", 01:19:01.024 "bdev_opal_delete", 01:19:01.024 "bdev_opal_get_info", 01:19:01.024 "bdev_opal_create", 01:19:01.024 "bdev_nvme_opal_revert", 01:19:01.024 "bdev_nvme_opal_init", 01:19:01.024 "bdev_nvme_send_cmd", 01:19:01.024 "bdev_nvme_set_keys", 01:19:01.024 "bdev_nvme_get_path_iostat", 01:19:01.024 "bdev_nvme_get_mdns_discovery_info", 01:19:01.024 "bdev_nvme_stop_mdns_discovery", 01:19:01.024 "bdev_nvme_start_mdns_discovery", 01:19:01.024 "bdev_nvme_set_multipath_policy", 01:19:01.024 
"bdev_nvme_set_preferred_path", 01:19:01.024 "bdev_nvme_get_io_paths", 01:19:01.024 "bdev_nvme_remove_error_injection", 01:19:01.024 "bdev_nvme_add_error_injection", 01:19:01.024 "bdev_nvme_get_discovery_info", 01:19:01.024 "bdev_nvme_stop_discovery", 01:19:01.024 "bdev_nvme_start_discovery", 01:19:01.024 "bdev_nvme_get_controller_health_info", 01:19:01.024 "bdev_nvme_disable_controller", 01:19:01.024 "bdev_nvme_enable_controller", 01:19:01.024 "bdev_nvme_reset_controller", 01:19:01.024 "bdev_nvme_get_transport_statistics", 01:19:01.024 "bdev_nvme_apply_firmware", 01:19:01.024 "bdev_nvme_detach_controller", 01:19:01.025 "bdev_nvme_get_controllers", 01:19:01.025 "bdev_nvme_attach_controller", 01:19:01.025 "bdev_nvme_set_hotplug", 01:19:01.025 "bdev_nvme_set_options", 01:19:01.025 "bdev_passthru_delete", 01:19:01.025 "bdev_passthru_create", 01:19:01.025 "bdev_lvol_set_parent_bdev", 01:19:01.025 "bdev_lvol_set_parent", 01:19:01.025 "bdev_lvol_check_shallow_copy", 01:19:01.025 "bdev_lvol_start_shallow_copy", 01:19:01.025 "bdev_lvol_grow_lvstore", 01:19:01.025 "bdev_lvol_get_lvols", 01:19:01.025 "bdev_lvol_get_lvstores", 01:19:01.025 "bdev_lvol_delete", 01:19:01.025 "bdev_lvol_set_read_only", 01:19:01.025 "bdev_lvol_resize", 01:19:01.025 "bdev_lvol_decouple_parent", 01:19:01.025 "bdev_lvol_inflate", 01:19:01.025 "bdev_lvol_rename", 01:19:01.025 "bdev_lvol_clone_bdev", 01:19:01.025 "bdev_lvol_clone", 01:19:01.025 "bdev_lvol_snapshot", 01:19:01.025 "bdev_lvol_create", 01:19:01.025 "bdev_lvol_delete_lvstore", 01:19:01.025 "bdev_lvol_rename_lvstore", 01:19:01.025 "bdev_lvol_create_lvstore", 01:19:01.025 "bdev_raid_set_options", 01:19:01.025 "bdev_raid_remove_base_bdev", 01:19:01.025 "bdev_raid_add_base_bdev", 01:19:01.025 "bdev_raid_delete", 01:19:01.025 "bdev_raid_create", 01:19:01.025 "bdev_raid_get_bdevs", 01:19:01.025 "bdev_error_inject_error", 01:19:01.025 "bdev_error_delete", 01:19:01.025 "bdev_error_create", 01:19:01.025 "bdev_split_delete", 01:19:01.025 
"bdev_split_create", 01:19:01.025 "bdev_delay_delete", 01:19:01.025 "bdev_delay_create", 01:19:01.025 "bdev_delay_update_latency", 01:19:01.025 "bdev_zone_block_delete", 01:19:01.025 "bdev_zone_block_create", 01:19:01.025 "blobfs_create", 01:19:01.025 "blobfs_detect", 01:19:01.025 "blobfs_set_cache_size", 01:19:01.025 "bdev_aio_delete", 01:19:01.025 "bdev_aio_rescan", 01:19:01.025 "bdev_aio_create", 01:19:01.025 "bdev_ftl_set_property", 01:19:01.025 "bdev_ftl_get_properties", 01:19:01.025 "bdev_ftl_get_stats", 01:19:01.025 "bdev_ftl_unmap", 01:19:01.025 "bdev_ftl_unload", 01:19:01.025 "bdev_ftl_delete", 01:19:01.025 "bdev_ftl_load", 01:19:01.025 "bdev_ftl_create", 01:19:01.025 "bdev_virtio_attach_controller", 01:19:01.025 "bdev_virtio_scsi_get_devices", 01:19:01.025 "bdev_virtio_detach_controller", 01:19:01.025 "bdev_virtio_blk_set_hotplug", 01:19:01.025 "bdev_iscsi_delete", 01:19:01.025 "bdev_iscsi_create", 01:19:01.025 "bdev_iscsi_set_options", 01:19:01.025 "accel_error_inject_error", 01:19:01.025 "ioat_scan_accel_module", 01:19:01.025 "dsa_scan_accel_module", 01:19:01.025 "iaa_scan_accel_module", 01:19:01.025 "keyring_file_remove_key", 01:19:01.025 "keyring_file_add_key", 01:19:01.025 "keyring_linux_set_options", 01:19:01.025 "fsdev_aio_delete", 01:19:01.025 "fsdev_aio_create", 01:19:01.025 "iscsi_get_histogram", 01:19:01.025 "iscsi_enable_histogram", 01:19:01.025 "iscsi_set_options", 01:19:01.025 "iscsi_get_auth_groups", 01:19:01.025 "iscsi_auth_group_remove_secret", 01:19:01.025 "iscsi_auth_group_add_secret", 01:19:01.025 "iscsi_delete_auth_group", 01:19:01.025 "iscsi_create_auth_group", 01:19:01.025 "iscsi_set_discovery_auth", 01:19:01.025 "iscsi_get_options", 01:19:01.025 "iscsi_target_node_request_logout", 01:19:01.025 "iscsi_target_node_set_redirect", 01:19:01.025 "iscsi_target_node_set_auth", 01:19:01.025 "iscsi_target_node_add_lun", 01:19:01.025 "iscsi_get_stats", 01:19:01.025 "iscsi_get_connections", 01:19:01.025 "iscsi_portal_group_set_auth", 
01:19:01.025 "iscsi_start_portal_group", 01:19:01.025 "iscsi_delete_portal_group", 01:19:01.025 "iscsi_create_portal_group", 01:19:01.025 "iscsi_get_portal_groups", 01:19:01.025 "iscsi_delete_target_node", 01:19:01.025 "iscsi_target_node_remove_pg_ig_maps", 01:19:01.025 "iscsi_target_node_add_pg_ig_maps", 01:19:01.025 "iscsi_create_target_node", 01:19:01.025 "iscsi_get_target_nodes", 01:19:01.025 "iscsi_delete_initiator_group", 01:19:01.025 "iscsi_initiator_group_remove_initiators", 01:19:01.025 "iscsi_initiator_group_add_initiators", 01:19:01.025 "iscsi_create_initiator_group", 01:19:01.025 "iscsi_get_initiator_groups", 01:19:01.025 "nvmf_set_crdt", 01:19:01.025 "nvmf_set_config", 01:19:01.025 "nvmf_set_max_subsystems", 01:19:01.025 "nvmf_stop_mdns_prr", 01:19:01.025 "nvmf_publish_mdns_prr", 01:19:01.025 "nvmf_subsystem_get_listeners", 01:19:01.025 "nvmf_subsystem_get_qpairs", 01:19:01.025 "nvmf_subsystem_get_controllers", 01:19:01.025 "nvmf_get_stats", 01:19:01.025 "nvmf_get_transports", 01:19:01.025 "nvmf_create_transport", 01:19:01.025 "nvmf_get_targets", 01:19:01.025 "nvmf_delete_target", 01:19:01.025 "nvmf_create_target", 01:19:01.025 "nvmf_subsystem_allow_any_host", 01:19:01.025 "nvmf_subsystem_set_keys", 01:19:01.025 "nvmf_subsystem_remove_host", 01:19:01.025 "nvmf_subsystem_add_host", 01:19:01.025 "nvmf_ns_remove_host", 01:19:01.025 "nvmf_ns_add_host", 01:19:01.025 "nvmf_subsystem_remove_ns", 01:19:01.025 "nvmf_subsystem_set_ns_ana_group", 01:19:01.025 "nvmf_subsystem_add_ns", 01:19:01.025 "nvmf_subsystem_listener_set_ana_state", 01:19:01.025 "nvmf_discovery_get_referrals", 01:19:01.025 "nvmf_discovery_remove_referral", 01:19:01.025 "nvmf_discovery_add_referral", 01:19:01.025 "nvmf_subsystem_remove_listener", 01:19:01.025 "nvmf_subsystem_add_listener", 01:19:01.025 "nvmf_delete_subsystem", 01:19:01.025 "nvmf_create_subsystem", 01:19:01.025 "nvmf_get_subsystems", 01:19:01.025 "env_dpdk_get_mem_stats", 01:19:01.025 "nbd_get_disks", 01:19:01.025 
"nbd_stop_disk", 01:19:01.025 "nbd_start_disk", 01:19:01.025 "ublk_recover_disk", 01:19:01.025 "ublk_get_disks", 01:19:01.025 "ublk_stop_disk", 01:19:01.025 "ublk_start_disk", 01:19:01.025 "ublk_destroy_target", 01:19:01.025 "ublk_create_target", 01:19:01.025 "virtio_blk_create_transport", 01:19:01.025 "virtio_blk_get_transports", 01:19:01.025 "vhost_controller_set_coalescing", 01:19:01.025 "vhost_get_controllers", 01:19:01.025 "vhost_delete_controller", 01:19:01.025 "vhost_create_blk_controller", 01:19:01.025 "vhost_scsi_controller_remove_target", 01:19:01.025 "vhost_scsi_controller_add_target", 01:19:01.025 "vhost_start_scsi_controller", 01:19:01.025 "vhost_create_scsi_controller", 01:19:01.025 "thread_set_cpumask", 01:19:01.025 "scheduler_set_options", 01:19:01.025 "framework_get_governor", 01:19:01.025 "framework_get_scheduler", 01:19:01.025 "framework_set_scheduler", 01:19:01.025 "framework_get_reactors", 01:19:01.025 "thread_get_io_channels", 01:19:01.025 "thread_get_pollers", 01:19:01.025 "thread_get_stats", 01:19:01.025 "framework_monitor_context_switch", 01:19:01.025 "spdk_kill_instance", 01:19:01.025 "log_enable_timestamps", 01:19:01.025 "log_get_flags", 01:19:01.025 "log_clear_flag", 01:19:01.025 "log_set_flag", 01:19:01.025 "log_get_level", 01:19:01.025 "log_set_level", 01:19:01.025 "log_get_print_level", 01:19:01.025 "log_set_print_level", 01:19:01.025 "framework_enable_cpumask_locks", 01:19:01.025 "framework_disable_cpumask_locks", 01:19:01.025 "framework_wait_init", 01:19:01.025 "framework_start_init", 01:19:01.025 "scsi_get_devices", 01:19:01.025 "bdev_get_histogram", 01:19:01.025 "bdev_enable_histogram", 01:19:01.026 "bdev_set_qos_limit", 01:19:01.026 "bdev_set_qd_sampling_period", 01:19:01.026 "bdev_get_bdevs", 01:19:01.026 "bdev_reset_iostat", 01:19:01.026 "bdev_get_iostat", 01:19:01.026 "bdev_examine", 01:19:01.026 "bdev_wait_for_examine", 01:19:01.026 "bdev_set_options", 01:19:01.026 "accel_get_stats", 01:19:01.026 "accel_set_options", 
01:19:01.026 "accel_set_driver", 01:19:01.026 "accel_crypto_key_destroy", 01:19:01.026 "accel_crypto_keys_get", 01:19:01.026 "accel_crypto_key_create", 01:19:01.026 "accel_assign_opc", 01:19:01.026 "accel_get_module_info", 01:19:01.026 "accel_get_opc_assignments", 01:19:01.026 "vmd_rescan", 01:19:01.026 "vmd_remove_device", 01:19:01.026 "vmd_enable", 01:19:01.026 "sock_get_default_impl", 01:19:01.026 "sock_set_default_impl", 01:19:01.026 "sock_impl_set_options", 01:19:01.026 "sock_impl_get_options", 01:19:01.026 "iobuf_get_stats", 01:19:01.026 "iobuf_set_options", 01:19:01.026 "keyring_get_keys", 01:19:01.026 "framework_get_pci_devices", 01:19:01.026 "framework_get_config", 01:19:01.026 "framework_get_subsystems", 01:19:01.026 "fsdev_set_opts", 01:19:01.026 "fsdev_get_opts", 01:19:01.026 "trace_get_info", 01:19:01.026 "trace_get_tpoint_group_mask", 01:19:01.026 "trace_disable_tpoint_group", 01:19:01.026 "trace_enable_tpoint_group", 01:19:01.026 "trace_clear_tpoint_mask", 01:19:01.026 "trace_set_tpoint_mask", 01:19:01.026 "notify_get_notifications", 01:19:01.026 "notify_get_types", 01:19:01.026 "spdk_get_version", 01:19:01.026 "rpc_get_methods" 01:19:01.026 ] 01:19:01.026 05:13:52 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 01:19:01.026 05:13:52 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 01:19:01.026 05:13:52 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 01:19:01.026 05:13:52 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 01:19:01.026 05:13:52 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 57676 01:19:01.026 05:13:52 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 57676 ']' 01:19:01.026 05:13:52 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 57676 01:19:01.026 05:13:52 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 01:19:01.026 05:13:52 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:19:01.026 05:13:52 spdkcli_tcp -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57676 01:19:01.026 05:13:52 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:19:01.026 05:13:52 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:19:01.026 killing process with pid 57676 01:19:01.026 05:13:52 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57676' 01:19:01.026 05:13:52 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 57676 01:19:01.026 05:13:52 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 57676 01:19:03.586 01:19:03.586 real 0m4.380s 01:19:03.586 user 0m7.780s 01:19:03.586 sys 0m0.788s 01:19:03.586 ************************************ 01:19:03.586 05:13:55 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 01:19:03.586 05:13:55 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 01:19:03.586 END TEST spdkcli_tcp 01:19:03.586 ************************************ 01:19:03.586 05:13:55 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 01:19:03.586 05:13:55 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:19:03.586 05:13:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 01:19:03.586 05:13:55 -- common/autotest_common.sh@10 -- # set +x 01:19:03.586 ************************************ 01:19:03.586 START TEST dpdk_mem_utility 01:19:03.586 ************************************ 01:19:03.586 05:13:55 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 01:19:03.586 * Looking for test storage... 
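The `killprocess 57676` trace above follows a guard sequence before signalling: confirm the PID is alive with `kill -0`, read its process name with `ps --no-headers -o comm=`, refuse to signal `sudo` itself, then `kill` and `wait`. A simplified sketch of that pattern (not the exact autotest_common.sh helper):

```shell
#!/usr/bin/env bash
# Simplified killprocess: verify the PID before signalling it.
killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1
    kill -0 "$pid" 2>/dev/null || return 1        # is it alive?
    local name
    name=$(ps --no-headers -o comm= -p "$pid")    # what is it running as?
    [ "$name" = sudo ] && return 1                # never signal sudo itself
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null                       # reap if it is our child
    return 0                                      # ignore the signal exit status
}

sleep 30 &
demo_pid=$!
killprocess "$demo_pid"
```

The `wait` at the end matters: it reaps the child so the PID cannot be confused with a later process reusing the same number.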
01:19:03.586 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 01:19:03.586 05:13:55 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:19:03.586 05:13:55 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 01:19:03.586 05:13:55 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:19:03.844 05:13:55 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:19:03.844 05:13:55 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:19:03.844 05:13:55 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 01:19:03.844 05:13:55 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 01:19:03.844 05:13:55 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 01:19:03.844 05:13:55 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 01:19:03.844 05:13:55 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 01:19:03.844 05:13:55 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 01:19:03.844 05:13:55 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 01:19:03.844 05:13:55 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 01:19:03.844 05:13:55 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 01:19:03.844 05:13:55 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:19:03.844 05:13:55 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 01:19:03.844 05:13:55 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 01:19:03.845 05:13:55 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 01:19:03.845 05:13:55 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:19:03.845 05:13:55 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 01:19:03.845 05:13:55 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 01:19:03.845 05:13:55 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:19:03.845 05:13:55 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 01:19:03.845 05:13:55 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 01:19:03.845 05:13:55 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 01:19:03.845 05:13:55 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 01:19:03.845 05:13:55 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:19:03.845 05:13:55 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 01:19:03.845 05:13:55 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 01:19:03.845 05:13:55 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:19:03.845 05:13:55 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:19:03.845 05:13:55 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 01:19:03.845 05:13:55 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:19:03.845 05:13:55 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:19:03.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:19:03.845 --rc genhtml_branch_coverage=1 01:19:03.845 --rc genhtml_function_coverage=1 01:19:03.845 --rc genhtml_legend=1 01:19:03.845 --rc geninfo_all_blocks=1 01:19:03.845 --rc geninfo_unexecuted_blocks=1 01:19:03.845 01:19:03.845 ' 01:19:03.845 05:13:55 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:19:03.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:19:03.845 --rc genhtml_branch_coverage=1 01:19:03.845 --rc genhtml_function_coverage=1 01:19:03.845 --rc genhtml_legend=1 01:19:03.845 --rc geninfo_all_blocks=1 01:19:03.845 --rc 
geninfo_unexecuted_blocks=1 01:19:03.845 01:19:03.845 ' 01:19:03.845 05:13:55 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:19:03.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:19:03.845 --rc genhtml_branch_coverage=1 01:19:03.845 --rc genhtml_function_coverage=1 01:19:03.845 --rc genhtml_legend=1 01:19:03.845 --rc geninfo_all_blocks=1 01:19:03.845 --rc geninfo_unexecuted_blocks=1 01:19:03.845 01:19:03.845 ' 01:19:03.845 05:13:55 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:19:03.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:19:03.845 --rc genhtml_branch_coverage=1 01:19:03.845 --rc genhtml_function_coverage=1 01:19:03.845 --rc genhtml_legend=1 01:19:03.845 --rc geninfo_all_blocks=1 01:19:03.845 --rc geninfo_unexecuted_blocks=1 01:19:03.845 01:19:03.845 ' 01:19:03.845 05:13:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 01:19:03.845 05:13:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=57798 01:19:03.845 05:13:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 57798 01:19:03.845 05:13:55 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 57798 ']' 01:19:03.845 05:13:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 01:19:03.845 05:13:55 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:19:03.845 05:13:55 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 01:19:03.845 05:13:55 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:19:03.845 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
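The `waitforlisten` step above blocks until the freshly launched spdk_tgt answers on /var/tmp/spdk.sock, retrying up to `max_retries=100` times. The general shape of that loop, sketched with a hypothetical `wait_for` name and a marker-file probe standing in for the RPC check:

```shell
#!/usr/bin/env bash
# Retry a readiness probe until it succeeds or retries run out,
# loosely modelled on the waitforlisten/max_retries pattern above.
wait_for() {
    local max_retries=$1; shift
    local i
    for (( i = 0; i < max_retries; i++ )); do
        "$@" && return 0      # probe succeeded: target is ready
        sleep 0.1
    done
    return 1                  # gave up
}

# Example probe: wait until a marker file appears (stand-in for an RPC ping).
( sleep 0.3; touch /tmp/ready.$$ ) &
wait_for 50 test -e /tmp/ready.$$ && echo ready
rm -f /tmp/ready.$$
```

In the real test the probe is an RPC round-trip, which also proves the UNIX socket is accepting connections, not merely that the process exists.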
01:19:03.845 05:13:55 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 01:19:03.845 05:13:55 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 01:19:03.845 [2024-12-09 05:13:55.410037] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:19:03.845 [2024-12-09 05:13:55.410726] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57798 ] 01:19:04.103 [2024-12-09 05:13:55.597679] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:19:04.361 [2024-12-09 05:13:55.747251] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:19:05.299 05:13:56 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:19:05.299 05:13:56 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 01:19:05.299 05:13:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 01:19:05.299 05:13:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 01:19:05.299 05:13:56 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 01:19:05.299 05:13:56 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 01:19:05.299 { 01:19:05.299 "filename": "/tmp/spdk_mem_dump.txt" 01:19:05.299 } 01:19:05.299 05:13:56 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:19:05.299 05:13:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 01:19:05.299 DPDK memory size 824.000000 MiB in 1 heap(s) 01:19:05.299 1 heaps totaling size 824.000000 MiB 01:19:05.299 size: 824.000000 MiB heap id: 0 01:19:05.299 end heaps---------- 01:19:05.299 9 mempools totaling size 603.782043 MiB 01:19:05.299 
size: 212.674988 MiB name: PDU_immediate_data_Pool 01:19:05.299 size: 158.602051 MiB name: PDU_data_out_Pool 01:19:05.299 size: 100.555481 MiB name: bdev_io_57798 01:19:05.299 size: 50.003479 MiB name: msgpool_57798 01:19:05.299 size: 36.509338 MiB name: fsdev_io_57798 01:19:05.299 size: 21.763794 MiB name: PDU_Pool 01:19:05.299 size: 19.513306 MiB name: SCSI_TASK_Pool 01:19:05.299 size: 4.133484 MiB name: evtpool_57798 01:19:05.299 size: 0.026123 MiB name: Session_Pool 01:19:05.299 end mempools------- 01:19:05.299 6 memzones totaling size 4.142822 MiB 01:19:05.299 size: 1.000366 MiB name: RG_ring_0_57798 01:19:05.299 size: 1.000366 MiB name: RG_ring_1_57798 01:19:05.299 size: 1.000366 MiB name: RG_ring_4_57798 01:19:05.299 size: 1.000366 MiB name: RG_ring_5_57798 01:19:05.299 size: 0.125366 MiB name: RG_ring_2_57798 01:19:05.299 size: 0.015991 MiB name: RG_ring_3_57798 01:19:05.299 end memzones------- 01:19:05.299 05:13:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 01:19:05.299 heap id: 0 total size: 824.000000 MiB number of busy elements: 319 number of free elements: 18 01:19:05.299 list of free elements. 
size: 16.780396 MiB 01:19:05.299 element at address: 0x200006400000 with size: 1.995972 MiB 01:19:05.299 element at address: 0x20000a600000 with size: 1.995972 MiB 01:19:05.299 element at address: 0x200003e00000 with size: 1.991028 MiB 01:19:05.299 element at address: 0x200019500040 with size: 0.999939 MiB 01:19:05.299 element at address: 0x200019900040 with size: 0.999939 MiB 01:19:05.299 element at address: 0x200019a00000 with size: 0.999084 MiB 01:19:05.299 element at address: 0x200032600000 with size: 0.994324 MiB 01:19:05.299 element at address: 0x200000400000 with size: 0.992004 MiB 01:19:05.299 element at address: 0x200019200000 with size: 0.959656 MiB 01:19:05.299 element at address: 0x200019d00040 with size: 0.936401 MiB 01:19:05.299 element at address: 0x200000200000 with size: 0.716980 MiB 01:19:05.299 element at address: 0x20001b400000 with size: 0.561951 MiB 01:19:05.299 element at address: 0x200000c00000 with size: 0.489197 MiB 01:19:05.299 element at address: 0x200019600000 with size: 0.487976 MiB 01:19:05.299 element at address: 0x200019e00000 with size: 0.485413 MiB 01:19:05.299 element at address: 0x200012c00000 with size: 0.433228 MiB 01:19:05.299 element at address: 0x200028800000 with size: 0.390442 MiB 01:19:05.299 element at address: 0x200000800000 with size: 0.350891 MiB 01:19:05.299 list of standard malloc elements. 
size: 199.288696 MiB 01:19:05.299 element at address: 0x20000a7fef80 with size: 132.000183 MiB 01:19:05.299 element at address: 0x2000065fef80 with size: 64.000183 MiB 01:19:05.299 element at address: 0x2000193fff80 with size: 1.000183 MiB 01:19:05.299 element at address: 0x2000197fff80 with size: 1.000183 MiB 01:19:05.299 element at address: 0x200019bfff80 with size: 1.000183 MiB 01:19:05.299 element at address: 0x2000003d9e80 with size: 0.140808 MiB 01:19:05.299 element at address: 0x200019deff40 with size: 0.062683 MiB 01:19:05.299 element at address: 0x2000003fdf40 with size: 0.007996 MiB 01:19:05.299 element at address: 0x20000a5ff040 with size: 0.000427 MiB 01:19:05.299 element at address: 0x200019defdc0 with size: 0.000366 MiB 01:19:05.299 element at address: 0x200012bff040 with size: 0.000305 MiB 01:19:05.299 element at address: 0x2000002d7b00 with size: 0.000244 MiB 01:19:05.299 element at address: 0x2000003d9d80 with size: 0.000244 MiB 01:19:05.299 element at address: 0x2000004fdf40 with size: 0.000244 MiB 01:19:05.299 element at address: 0x2000004fe040 with size: 0.000244 MiB 01:19:05.299 element at address: 0x2000004fe140 with size: 0.000244 MiB 01:19:05.299 element at address: 0x2000004fe240 with size: 0.000244 MiB 01:19:05.299 element at address: 0x2000004fe340 with size: 0.000244 MiB 01:19:05.299 element at address: 0x2000004fe440 with size: 0.000244 MiB 01:19:05.299 element at address: 0x2000004fe540 with size: 0.000244 MiB 01:19:05.299 element at address: 0x2000004fe640 with size: 0.000244 MiB 01:19:05.299 element at address: 0x2000004fe740 with size: 0.000244 MiB 01:19:05.299 element at address: 0x2000004fe840 with size: 0.000244 MiB 01:19:05.299 element at address: 0x2000004fe940 with size: 0.000244 MiB 01:19:05.299 element at address: 0x2000004fea40 with size: 0.000244 MiB 01:19:05.299 element at address: 0x2000004feb40 with size: 0.000244 MiB 01:19:05.299 element at address: 0x2000004fec40 with size: 0.000244 MiB 01:19:05.299 element at 
address: 0x2000004fed40 with size: 0.000244 MiB 01:19:05.299 element at address: 0x2000004fee40 with size: 0.000244 MiB 01:19:05.299 element at address: 0x2000004fef40 with size: 0.000244 MiB 01:19:05.299 element at address: 0x2000004ff040 with size: 0.000244 MiB 01:19:05.299 element at address: 0x2000004ff140 with size: 0.000244 MiB 01:19:05.299 element at address: 0x2000004ff240 with size: 0.000244 MiB 01:19:05.299 element at address: 0x2000004ff340 with size: 0.000244 MiB 01:19:05.299 element at address: 0x2000004ff440 with size: 0.000244 MiB 01:19:05.299 element at address: 0x2000004ff540 with size: 0.000244 MiB 01:19:05.299 element at address: 0x2000004ff640 with size: 0.000244 MiB 01:19:05.299 element at address: 0x2000004ff740 with size: 0.000244 MiB 01:19:05.299 element at address: 0x2000004ff840 with size: 0.000244 MiB 01:19:05.299 element at address: 0x2000004ff940 with size: 0.000244 MiB 01:19:05.299 element at address: 0x2000004ffbc0 with size: 0.000244 MiB 01:19:05.299 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 01:19:05.299 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 01:19:05.299 element at address: 0x20000087e1c0 with size: 0.000244 MiB 01:19:05.299 element at address: 0x20000087e2c0 with size: 0.000244 MiB 01:19:05.299 element at address: 0x20000087e3c0 with size: 0.000244 MiB 01:19:05.299 element at address: 0x20000087e4c0 with size: 0.000244 MiB 01:19:05.299 element at address: 0x20000087e5c0 with size: 0.000244 MiB 01:19:05.299 element at address: 0x20000087e6c0 with size: 0.000244 MiB 01:19:05.299 element at address: 0x20000087e7c0 with size: 0.000244 MiB 01:19:05.299 element at address: 0x20000087e8c0 with size: 0.000244 MiB 01:19:05.299 element at address: 0x20000087e9c0 with size: 0.000244 MiB 01:19:05.299 element at address: 0x20000087eac0 with size: 0.000244 MiB 01:19:05.299 element at address: 0x20000087ebc0 with size: 0.000244 MiB 01:19:05.299 element at address: 0x20000087ecc0 with size: 0.000244 MiB 
01:19:05.299 element at address: 0x20000087edc0 with size: 0.000244 MiB 01:19:05.299 element at address: 0x20000087eec0 with size: 0.000244 MiB 01:19:05.299 element at address: 0x20000087efc0 with size: 0.000244 MiB 01:19:05.299 element at address: 0x20000087f0c0 with size: 0.000244 MiB 01:19:05.299 element at address: 0x20000087f1c0 with size: 0.000244 MiB 01:19:05.300 element at address: 0x20000087f2c0 with size: 0.000244 MiB 01:19:05.300 element at address: 0x20000087f3c0 with size: 0.000244 MiB 01:19:05.300 element at address: 0x20000087f4c0 with size: 0.000244 MiB 01:19:05.300 element at address: 0x2000008ff800 with size: 0.000244 MiB 01:19:05.300 element at address: 0x2000008ffa80 with size: 0.000244 MiB 01:19:05.300 element at address: 0x200000c7d3c0 with size: 0.000244 MiB 01:19:05.300 element at address: 0x200000c7d4c0 with size: 0.000244 MiB 01:19:05.300 element at address: 0x200000c7d5c0 with size: 0.000244 MiB 01:19:05.300 element at address: 0x200000c7d6c0 with size: 0.000244 MiB 01:19:05.300 element at address: 0x200000c7d7c0 with size: 0.000244 MiB 01:19:05.300 element at address: 0x200000c7d8c0 with size: 0.000244 MiB 01:19:05.300 element at address: 0x200000c7d9c0 with size: 0.000244 MiB 01:19:05.300 element at address: 0x200000c7dac0 with size: 0.000244 MiB 01:19:05.300 element at address: 0x200000c7dbc0 with size: 0.000244 MiB 01:19:05.300 element at address: 0x200000c7dcc0 with size: 0.000244 MiB 01:19:05.300 element at address: 0x200000c7ddc0 with size: 0.000244 MiB 01:19:05.300 element at address: 0x200000c7dec0 with size: 0.000244 MiB 01:19:05.300 element at address: 0x200000c7dfc0 with size: 0.000244 MiB 01:19:05.300 element at address: 0x200000c7e0c0 with size: 0.000244 MiB 01:19:05.300 element at address: 0x200000c7e1c0 with size: 0.000244 MiB 01:19:05.300 element at address: 0x200000c7e2c0 with size: 0.000244 MiB 01:19:05.300 element at address: 0x200000c7e3c0 with size: 0.000244 MiB 01:19:05.300 element at address: 0x200000c7e4c0 with 
size: 0.000244 MiB 01:19:05.300 element at address: 0x200000c7e5c0 with size: 0.000244 MiB 01:19:05.300 element at address: 0x200000c7e6c0 with size: 0.000244 MiB 01:19:05.300 element at address: 0x200000c7e7c0 with size: 0.000244 MiB 01:19:05.300 element at address: 0x200000c7e8c0 with size: 0.000244 MiB 01:19:05.300 element at address: 0x200000c7e9c0 with size: 0.000244 MiB 01:19:05.300 element at address: 0x200000c7eac0 with size: 0.000244 MiB 01:19:05.300 element at address: 0x200000c7ebc0 with size: 0.000244 MiB 01:19:05.300 element at address: 0x200000cfef00 with size: 0.000244 MiB 01:19:05.300 element at address: 0x200000cff000 with size: 0.000244 MiB 01:19:05.300 element at address: 0x20000a5ff200 with size: 0.000244 MiB 01:19:05.300 element at address: 0x20000a5ff300 with size: 0.000244 MiB 01:19:05.300 element at address: 0x20000a5ff400 with size: 0.000244 MiB 01:19:05.300 element at address: 0x20000a5ff500 with size: 0.000244 MiB 01:19:05.300 element at address: 0x20000a5ff600 with size: 0.000244 MiB 01:19:05.300 element at address: 0x20000a5ff700 with size: 0.000244 MiB 01:19:05.300 element at address: 0x20000a5ff800 with size: 0.000244 MiB 01:19:05.300 element at address: 0x20000a5ff900 with size: 0.000244 MiB 01:19:05.300 element at address: 0x20000a5ffa00 with size: 0.000244 MiB 01:19:05.300 element at address: 0x20000a5ffb00 with size: 0.000244 MiB 01:19:05.300 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 01:19:05.300 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 01:19:05.300 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 01:19:05.300 element at address: 0x20000a5fff00 with size: 0.000244 MiB 01:19:05.300 element at address: 0x200012bff180 with size: 0.000244 MiB 01:19:05.300 element at address: 0x200012bff280 with size: 0.000244 MiB 01:19:05.300 element at address: 0x200012bff380 with size: 0.000244 MiB 01:19:05.300 element at address: 0x200012bff480 with size: 0.000244 MiB 01:19:05.300 element at address: 
0x200012bff580 with size: 0.000244 MiB 01:19:05.300 element at address: 0x200012bff680 with size: 0.000244 MiB 01:19:05.300 element at address: 0x200012bff780 with size: 0.000244 MiB 01:19:05.300 element at address: 0x200012bff880 with size: 0.000244 MiB 01:19:05.300 element at address: 0x200012bff980 with size: 0.000244 MiB 01:19:05.300 element at address: 0x200012bffa80 with size: 0.000244 MiB 01:19:05.300 element at address: 0x200012bffb80 with size: 0.000244 MiB 01:19:05.300 element at address: 0x200012bffc80 with size: 0.000244 MiB 01:19:05.300 element at address: 0x200012bfff00 with size: 0.000244 MiB 01:19:05.300 element at address: 0x200012c6ee80 with size: 0.000244 MiB 01:19:05.300 element at address: 0x200012c6ef80 with size: 0.000244 MiB 01:19:05.300 element at address: 0x200012c6f080 with size: 0.000244 MiB 01:19:05.300 element at address: 0x200012c6f180 with size: 0.000244 MiB 01:19:05.300 element at address: 0x200012c6f280 with size: 0.000244 MiB 01:19:05.300 element at address: 0x200012c6f380 with size: 0.000244 MiB 01:19:05.300 element at address: 0x200012c6f480 with size: 0.000244 MiB 01:19:05.300 element at address: 0x200012c6f580 with size: 0.000244 MiB 01:19:05.300 element at address: 0x200012c6f680 with size: 0.000244 MiB 01:19:05.300 element at address: 0x200012c6f780 with size: 0.000244 MiB 01:19:05.300 element at address: 0x200012c6f880 with size: 0.000244 MiB 01:19:05.300 element at address: 0x200012cefbc0 with size: 0.000244 MiB 01:19:05.300 element at address: 0x2000192fdd00 with size: 0.000244 MiB 01:19:05.300 element at address: 0x20001967cec0 with size: 0.000244 MiB 01:19:05.300 element at address: 0x20001967cfc0 with size: 0.000244 MiB 01:19:05.300 element at address: 0x20001967d0c0 with size: 0.000244 MiB 01:19:05.300 element at address: 0x20001967d1c0 with size: 0.000244 MiB 01:19:05.300 element at address: 0x20001967d2c0 with size: 0.000244 MiB 01:19:05.300 element at address: 0x20001967d3c0 with size: 0.000244 MiB 01:19:05.300 
element at address: 0x20001967d4c0 with size: 0.000244 MiB 01:19:05.300 element at address: 0x20001967d5c0 with size: 0.000244 MiB 01:19:05.300 element at address: 0x20001967d6c0 with size: 0.000244 MiB 01:19:05.300 element at address: 0x20001967d7c0 with size: 0.000244 MiB 01:19:05.300 element at address: 0x20001967d8c0 with size: 0.000244 MiB 01:19:05.300 element at address: 0x20001967d9c0 with size: 0.000244 MiB 01:19:05.300 element at address: 0x2000196fdd00 with size: 0.000244 MiB 01:19:05.300 element at address: 0x200019affc40 with size: 0.000244 MiB 01:19:05.300 element at address: 0x200019defbc0 with size: 0.000244 MiB 01:19:05.300 element at address: 0x200019defcc0 with size: 0.000244 MiB 01:19:05.300 element at address: 0x200019ebc680 with size: 0.000244 MiB 01:19:05.300 element at address: 0x20001b48fdc0 with size: 0.000244 MiB 01:19:05.300 element at address: 0x20001b48fec0 with size: 0.000244 MiB 01:19:05.300 element at address: 0x20001b48ffc0 with size: 0.000244 MiB 01:19:05.300 element at address: 0x20001b4900c0 with size: 0.000244 MiB 01:19:05.300 element at address: 0x20001b4901c0 with size: 0.000244 MiB 01:19:05.300 element at address: 0x20001b4902c0 with size: 0.000244 MiB 01:19:05.300 element at address: 0x20001b4903c0 with size: 0.000244 MiB 01:19:05.300 element at address: 0x20001b4904c0 with size: 0.000244 MiB 01:19:05.300 element at address: 0x20001b4905c0 with size: 0.000244 MiB 01:19:05.300 element at address: 0x20001b4906c0 with size: 0.000244 MiB 01:19:05.300 element at address: 0x20001b4907c0 with size: 0.000244 MiB 01:19:05.300 element at address: 0x20001b4908c0 with size: 0.000244 MiB 01:19:05.300 element at address: 0x20001b4909c0 with size: 0.000244 MiB 01:19:05.300 element at address: 0x20001b490ac0 with size: 0.000244 MiB 01:19:05.300 element at address: 0x20001b490bc0 with size: 0.000244 MiB 01:19:05.300 element at address: 0x20001b490cc0 with size: 0.000244 MiB 01:19:05.300 element at address: 0x20001b490dc0 with size: 0.000244 
MiB 01:19:05.300 element at address: 0x20001b490ec0 with size: 0.000244 MiB 01:19:05.300 element at address: 0x20001b490fc0 with size: 0.000244 MiB 01:19:05.300 element at address: 0x20001b4910c0 with size: 0.000244 MiB 01:19:05.300 element at address: 0x20001b4911c0 with size: 0.000244 MiB 01:19:05.300 element at address: 0x20001b4912c0 with size: 0.000244 MiB 01:19:05.300 element at address: 0x20001b4913c0 with size: 0.000244 MiB 01:19:05.300 element at address: 0x20001b4914c0 with size: 0.000244 MiB 01:19:05.300 element at address: 0x20001b4915c0 with size: 0.000244 MiB 01:19:05.300 element at address: 0x20001b4916c0 with size: 0.000244 MiB 01:19:05.300 element at address: 0x20001b4917c0 with size: 0.000244 MiB 01:19:05.300 element at address: 0x20001b4918c0 with size: 0.000244 MiB 01:19:05.300 element at address: 0x20001b4919c0 with size: 0.000244 MiB 01:19:05.300 element at address: 0x20001b491ac0 with size: 0.000244 MiB 01:19:05.300 element at address: 0x20001b491bc0 with size: 0.000244 MiB 01:19:05.300 element at address: 0x20001b491cc0 with size: 0.000244 MiB 01:19:05.300 element at address: 0x20001b491dc0 with size: 0.000244 MiB 01:19:05.300 element at address: 0x20001b491ec0 with size: 0.000244 MiB 01:19:05.300 element at address: 0x20001b491fc0 with size: 0.000244 MiB 01:19:05.300 element at address: 0x20001b4920c0 with size: 0.000244 MiB 01:19:05.300 element at address: 0x20001b4921c0 with size: 0.000244 MiB 01:19:05.300 element at address: 0x20001b4922c0 with size: 0.000244 MiB 01:19:05.300 element at address: 0x20001b4923c0 with size: 0.000244 MiB 01:19:05.300 element at address: 0x20001b4924c0 with size: 0.000244 MiB 01:19:05.300 element at address: 0x20001b4925c0 with size: 0.000244 MiB 01:19:05.300 element at address: 0x20001b4926c0 with size: 0.000244 MiB 01:19:05.300 element at address: 0x20001b4927c0 with size: 0.000244 MiB 01:19:05.300 element at address: 0x20001b4928c0 with size: 0.000244 MiB 01:19:05.300 element at address: 0x20001b4929c0 
with size: 0.000244 MiB 01:19:05.300 element at address: 0x20001b492ac0 with size: 0.000244 MiB 01:19:05.300 element at address: 0x20001b492bc0 with size: 0.000244 MiB 01:19:05.300 element at address: 0x20001b492cc0 with size: 0.000244 MiB 01:19:05.300 element at address: 0x20001b492dc0 with size: 0.000244 MiB 01:19:05.300 element at address: 0x20001b492ec0 with size: 0.000244 MiB 01:19:05.300 element at address: 0x20001b492fc0 with size: 0.000244 MiB 01:19:05.300 element at address: 0x20001b4930c0 with size: 0.000244 MiB 01:19:05.300 element at address: 0x20001b4931c0 with size: 0.000244 MiB 01:19:05.300 element at address: 0x20001b4932c0 with size: 0.000244 MiB 01:19:05.300 element at address: 0x20001b4933c0 with size: 0.000244 MiB 01:19:05.300 element at address: 0x20001b4934c0 with size: 0.000244 MiB 01:19:05.300 element at address: 0x20001b4935c0 with size: 0.000244 MiB 01:19:05.300 element at address: 0x20001b4936c0 with size: 0.000244 MiB 01:19:05.300 element at address: 0x20001b4937c0 with size: 0.000244 MiB 01:19:05.300 element at address: 0x20001b4938c0 with size: 0.000244 MiB 01:19:05.300 element at address: 0x20001b4939c0 with size: 0.000244 MiB 01:19:05.300 element at address: 0x20001b493ac0 with size: 0.000244 MiB 01:19:05.301 element at address: 0x20001b493bc0 with size: 0.000244 MiB 01:19:05.301 element at address: 0x20001b493cc0 with size: 0.000244 MiB 01:19:05.301 element at address: 0x20001b493dc0 with size: 0.000244 MiB 01:19:05.301 element at address: 0x20001b493ec0 with size: 0.000244 MiB 01:19:05.301 element at address: 0x20001b493fc0 with size: 0.000244 MiB 01:19:05.301 element at address: 0x20001b4940c0 with size: 0.000244 MiB 01:19:05.301 element at address: 0x20001b4941c0 with size: 0.000244 MiB 01:19:05.301 element at address: 0x20001b4942c0 with size: 0.000244 MiB 01:19:05.301 element at address: 0x20001b4943c0 with size: 0.000244 MiB 01:19:05.301 element at address: 0x20001b4944c0 with size: 0.000244 MiB 01:19:05.301 element at 
address: 0x20001b4945c0 with size: 0.000244 MiB 01:19:05.301 element at address: 0x20001b4946c0 with size: 0.000244 MiB 01:19:05.301 element at address: 0x20001b4947c0 with size: 0.000244 MiB 01:19:05.301 element at address: 0x20001b4948c0 with size: 0.000244 MiB 01:19:05.301 element at address: 0x20001b4949c0 with size: 0.000244 MiB 01:19:05.301 element at address: 0x20001b494ac0 with size: 0.000244 MiB 01:19:05.301 element at address: 0x20001b494bc0 with size: 0.000244 MiB 01:19:05.301 element at address: 0x20001b494cc0 with size: 0.000244 MiB 01:19:05.301 element at address: 0x20001b494dc0 with size: 0.000244 MiB 01:19:05.301 element at address: 0x20001b494ec0 with size: 0.000244 MiB 01:19:05.301 element at address: 0x20001b494fc0 with size: 0.000244 MiB 01:19:05.301 element at address: 0x20001b4950c0 with size: 0.000244 MiB 01:19:05.301 element at address: 0x20001b4951c0 with size: 0.000244 MiB 01:19:05.301 element at address: 0x20001b4952c0 with size: 0.000244 MiB 01:19:05.301 element at address: 0x20001b4953c0 with size: 0.000244 MiB 01:19:05.301 element at address: 0x200028863f40 with size: 0.000244 MiB 01:19:05.301 element at address: 0x200028864040 with size: 0.000244 MiB 01:19:05.301 element at address: 0x20002886ad00 with size: 0.000244 MiB 01:19:05.301 element at address: 0x20002886af80 with size: 0.000244 MiB 01:19:05.301 element at address: 0x20002886b080 with size: 0.000244 MiB 01:19:05.301 element at address: 0x20002886b180 with size: 0.000244 MiB 01:19:05.301 element at address: 0x20002886b280 with size: 0.000244 MiB 01:19:05.301 element at address: 0x20002886b380 with size: 0.000244 MiB 01:19:05.301 element at address: 0x20002886b480 with size: 0.000244 MiB 01:19:05.301 element at address: 0x20002886b580 with size: 0.000244 MiB 01:19:05.301 element at address: 0x20002886b680 with size: 0.000244 MiB 01:19:05.301 element at address: 0x20002886b780 with size: 0.000244 MiB 01:19:05.301 element at address: 0x20002886b880 with size: 0.000244 MiB 
01:19:05.301 element at address: 0x20002886b980 with size: 0.000244 MiB 01:19:05.301 element at address: 0x20002886ba80 with size: 0.000244 MiB 01:19:05.301 element at address: 0x20002886bb80 with size: 0.000244 MiB 01:19:05.301 element at address: 0x20002886bc80 with size: 0.000244 MiB 01:19:05.301 element at address: 0x20002886bd80 with size: 0.000244 MiB 01:19:05.301 element at address: 0x20002886be80 with size: 0.000244 MiB 01:19:05.301 element at address: 0x20002886bf80 with size: 0.000244 MiB 01:19:05.301 element at address: 0x20002886c080 with size: 0.000244 MiB 01:19:05.301 element at address: 0x20002886c180 with size: 0.000244 MiB 01:19:05.301 element at address: 0x20002886c280 with size: 0.000244 MiB 01:19:05.301 element at address: 0x20002886c380 with size: 0.000244 MiB 01:19:05.301 element at address: 0x20002886c480 with size: 0.000244 MiB 01:19:05.301 element at address: 0x20002886c580 with size: 0.000244 MiB 01:19:05.301 element at address: 0x20002886c680 with size: 0.000244 MiB 01:19:05.301 element at address: 0x20002886c780 with size: 0.000244 MiB 01:19:05.301 element at address: 0x20002886c880 with size: 0.000244 MiB 01:19:05.301 element at address: 0x20002886c980 with size: 0.000244 MiB 01:19:05.301 element at address: 0x20002886ca80 with size: 0.000244 MiB 01:19:05.301 element at address: 0x20002886cb80 with size: 0.000244 MiB 01:19:05.301 element at address: 0x20002886cc80 with size: 0.000244 MiB 01:19:05.301 element at address: 0x20002886cd80 with size: 0.000244 MiB 01:19:05.301 element at address: 0x20002886ce80 with size: 0.000244 MiB 01:19:05.301 element at address: 0x20002886cf80 with size: 0.000244 MiB 01:19:05.301 element at address: 0x20002886d080 with size: 0.000244 MiB 01:19:05.301 element at address: 0x20002886d180 with size: 0.000244 MiB 01:19:05.301 element at address: 0x20002886d280 with size: 0.000244 MiB 01:19:05.301 element at address: 0x20002886d380 with size: 0.000244 MiB 01:19:05.301 element at address: 0x20002886d480 with 
size: 0.000244 MiB 01:19:05.301 element at address: 0x20002886d580 with size: 0.000244 MiB 01:19:05.301 element at address: 0x20002886d680 with size: 0.000244 MiB 01:19:05.301 element at address: 0x20002886d780 with size: 0.000244 MiB 01:19:05.301 element at address: 0x20002886d880 with size: 0.000244 MiB 01:19:05.301 element at address: 0x20002886d980 with size: 0.000244 MiB 01:19:05.301 element at address: 0x20002886da80 with size: 0.000244 MiB 01:19:05.301 element at address: 0x20002886db80 with size: 0.000244 MiB 01:19:05.301 element at address: 0x20002886dc80 with size: 0.000244 MiB 01:19:05.301 element at address: 0x20002886dd80 with size: 0.000244 MiB 01:19:05.301 element at address: 0x20002886de80 with size: 0.000244 MiB 01:19:05.301 element at address: 0x20002886df80 with size: 0.000244 MiB 01:19:05.301 element at address: 0x20002886e080 with size: 0.000244 MiB 01:19:05.301 element at address: 0x20002886e180 with size: 0.000244 MiB 01:19:05.301 element at address: 0x20002886e280 with size: 0.000244 MiB 01:19:05.301 element at address: 0x20002886e380 with size: 0.000244 MiB 01:19:05.301 element at address: 0x20002886e480 with size: 0.000244 MiB 01:19:05.301 element at address: 0x20002886e580 with size: 0.000244 MiB 01:19:05.301 element at address: 0x20002886e680 with size: 0.000244 MiB 01:19:05.301 element at address: 0x20002886e780 with size: 0.000244 MiB 01:19:05.301 element at address: 0x20002886e880 with size: 0.000244 MiB 01:19:05.301 element at address: 0x20002886e980 with size: 0.000244 MiB 01:19:05.301 element at address: 0x20002886ea80 with size: 0.000244 MiB 01:19:05.301 element at address: 0x20002886eb80 with size: 0.000244 MiB 01:19:05.301 element at address: 0x20002886ec80 with size: 0.000244 MiB 01:19:05.301 element at address: 0x20002886ed80 with size: 0.000244 MiB 01:19:05.301 element at address: 0x20002886ee80 with size: 0.000244 MiB 01:19:05.301 element at address: 0x20002886ef80 with size: 0.000244 MiB 01:19:05.301 element at address: 
0x20002886f080 with size: 0.000244 MiB 01:19:05.301 element at address: 0x20002886f180 with size: 0.000244 MiB 01:19:05.301 element at address: 0x20002886f280 with size: 0.000244 MiB 01:19:05.301 element at address: 0x20002886f380 with size: 0.000244 MiB 01:19:05.301 element at address: 0x20002886f480 with size: 0.000244 MiB 01:19:05.301 element at address: 0x20002886f580 with size: 0.000244 MiB 01:19:05.301 element at address: 0x20002886f680 with size: 0.000244 MiB 01:19:05.301 element at address: 0x20002886f780 with size: 0.000244 MiB 01:19:05.301 element at address: 0x20002886f880 with size: 0.000244 MiB 01:19:05.301 element at address: 0x20002886f980 with size: 0.000244 MiB 01:19:05.301 element at address: 0x20002886fa80 with size: 0.000244 MiB 01:19:05.301 element at address: 0x20002886fb80 with size: 0.000244 MiB 01:19:05.301 element at address: 0x20002886fc80 with size: 0.000244 MiB 01:19:05.301 element at address: 0x20002886fd80 with size: 0.000244 MiB 01:19:05.301 element at address: 0x20002886fe80 with size: 0.000244 MiB 01:19:05.301 list of memzone associated elements. 
size: 607.930908 MiB 01:19:05.301 element at address: 0x20001b4954c0 with size: 211.416809 MiB 01:19:05.301 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 01:19:05.301 element at address: 0x20002886ff80 with size: 157.562622 MiB 01:19:05.301 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 01:19:05.301 element at address: 0x200012df1e40 with size: 100.055115 MiB 01:19:05.301 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_57798_0 01:19:05.301 element at address: 0x200000dff340 with size: 48.003113 MiB 01:19:05.301 associated memzone info: size: 48.002930 MiB name: MP_msgpool_57798_0 01:19:05.301 element at address: 0x200003ffdb40 with size: 36.008972 MiB 01:19:05.301 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_57798_0 01:19:05.301 element at address: 0x200019fbe900 with size: 20.255615 MiB 01:19:05.301 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 01:19:05.301 element at address: 0x2000327feb00 with size: 18.005127 MiB 01:19:05.301 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 01:19:05.301 element at address: 0x2000004ffec0 with size: 3.000305 MiB 01:19:05.301 associated memzone info: size: 3.000122 MiB name: MP_evtpool_57798_0 01:19:05.301 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 01:19:05.301 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_57798 01:19:05.301 element at address: 0x2000002d7c00 with size: 1.008179 MiB 01:19:05.301 associated memzone info: size: 1.007996 MiB name: MP_evtpool_57798 01:19:05.301 element at address: 0x2000196fde00 with size: 1.008179 MiB 01:19:05.301 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 01:19:05.301 element at address: 0x200019ebc780 with size: 1.008179 MiB 01:19:05.301 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 01:19:05.301 element at address: 0x2000192fde00 with size: 1.008179 MiB 01:19:05.301 associated 
memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 01:19:05.301 element at address: 0x200012cefcc0 with size: 1.008179 MiB 01:19:05.301 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 01:19:05.301 element at address: 0x200000cff100 with size: 1.000549 MiB 01:19:05.301 associated memzone info: size: 1.000366 MiB name: RG_ring_0_57798 01:19:05.301 element at address: 0x2000008ffb80 with size: 1.000549 MiB 01:19:05.301 associated memzone info: size: 1.000366 MiB name: RG_ring_1_57798 01:19:05.301 element at address: 0x200019affd40 with size: 1.000549 MiB 01:19:05.302 associated memzone info: size: 1.000366 MiB name: RG_ring_4_57798 01:19:05.302 element at address: 0x2000326fe8c0 with size: 1.000549 MiB 01:19:05.302 associated memzone info: size: 1.000366 MiB name: RG_ring_5_57798 01:19:05.302 element at address: 0x20000087f5c0 with size: 0.500549 MiB 01:19:05.302 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_57798 01:19:05.302 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 01:19:05.302 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_57798 01:19:05.302 element at address: 0x20001967dac0 with size: 0.500549 MiB 01:19:05.302 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 01:19:05.302 element at address: 0x200012c6f980 with size: 0.500549 MiB 01:19:05.302 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 01:19:05.302 element at address: 0x200019e7c440 with size: 0.250549 MiB 01:19:05.302 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 01:19:05.302 element at address: 0x2000002b78c0 with size: 0.125549 MiB 01:19:05.302 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_57798 01:19:05.302 element at address: 0x20000085df80 with size: 0.125549 MiB 01:19:05.302 associated memzone info: size: 0.125366 MiB name: RG_ring_2_57798 01:19:05.302 element at address: 0x2000192f5ac0 with size: 0.031799 MiB 01:19:05.302 
associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 01:19:05.302 element at address: 0x200028864140 with size: 0.023804 MiB 01:19:05.302 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 01:19:05.302 element at address: 0x200000859d40 with size: 0.016174 MiB 01:19:05.302 associated memzone info: size: 0.015991 MiB name: RG_ring_3_57798 01:19:05.302 element at address: 0x20002886a2c0 with size: 0.002502 MiB 01:19:05.302 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 01:19:05.302 element at address: 0x2000004ffa40 with size: 0.000366 MiB 01:19:05.302 associated memzone info: size: 0.000183 MiB name: MP_msgpool_57798 01:19:05.302 element at address: 0x2000008ff900 with size: 0.000366 MiB 01:19:05.302 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_57798 01:19:05.302 element at address: 0x200012bffd80 with size: 0.000366 MiB 01:19:05.302 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_57798 01:19:05.302 element at address: 0x20002886ae00 with size: 0.000366 MiB 01:19:05.302 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 01:19:05.302 05:13:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 01:19:05.302 05:13:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 57798 01:19:05.302 05:13:56 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 57798 ']' 01:19:05.302 05:13:56 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 57798 01:19:05.302 05:13:56 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 01:19:05.302 05:13:56 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:19:05.302 05:13:56 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57798 01:19:05.302 05:13:56 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:19:05.302 05:13:56 dpdk_mem_utility -- 
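The `dpdk_mem_info` dump above reports every allocated element as a line of the form `element at address: 0x... with size: N MiB`. A hedged sketch of how such a dump could be post-processed with standard tools (the helper name `sum_element_sizes` and the assumption that the line format is stable are mine, not SPDK's):

```shell
# Sum the sizes of all malloc elements reported in a dpdk_mem_info dump file.
# Assumes lines of the form: "element at address: 0x... with size: N MiB".
sum_element_sizes() {
    awk '/element at address/ {
        for (i = 1; i <= NF; i++)
            if ($i == "size:") total += $(i + 1)   # field after "size:" is the MiB value
    } END { printf "%.6f MiB\n", total }' "$1"
}
```

This only aggregates; cross-checking the per-element sum against the `size:` header the utility prints for each heap is a quick sanity test of a dump.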
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:19:05.302 05:13:56 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57798' 01:19:05.302 killing process with pid 57798 01:19:05.302 05:13:56 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 57798 01:19:05.302 05:13:56 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 57798 01:19:07.835 01:19:07.835 real 0m4.239s 01:19:07.835 user 0m4.056s 01:19:07.835 sys 0m0.782s 01:19:07.835 05:13:59 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 01:19:07.835 05:13:59 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 01:19:07.835 ************************************ 01:19:07.835 END TEST dpdk_mem_utility 01:19:07.835 ************************************ 01:19:07.835 05:13:59 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 01:19:07.835 05:13:59 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:19:07.835 05:13:59 -- common/autotest_common.sh@1111 -- # xtrace_disable 01:19:07.835 05:13:59 -- common/autotest_common.sh@10 -- # set +x 01:19:07.835 ************************************ 01:19:07.835 START TEST event 01:19:07.835 ************************************ 01:19:07.835 05:13:59 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 01:19:07.835 * Looking for test storage... 
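The `killprocess` trace above follows a common guard pattern: check that a pid was given, probe liveness with `kill -0`, inspect the process name with `ps --no-headers -o comm=`, and refuse to kill a `sudo` wrapper before sending the signal and reaping the child. A minimal sketch along those lines (simplified relative to `autotest_common.sh`; exact messages and return codes are assumptions):

```shell
# Kill a test process by pid, guarding against empty pids and sudo wrappers.
killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1                 # refuse an empty pid
    kill -0 "$pid" 2>/dev/null || return 0    # already gone: nothing to do
    local name
    name=$(ps --no-headers -o comm= -p "$pid")   # GNU ps, as on the Linux test VMs
    [ "$name" = "sudo" ] && return 1          # never signal a sudo wrapper directly
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true           # reap the child; ignore its exit code
}
```

The `wait` at the end matters for tests like this one: it guarantees the pid is reaped before the next test reuses resources such as hugepages.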
01:19:07.835 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 01:19:07.835 05:13:59 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:19:07.835 05:13:59 event -- common/autotest_common.sh@1693 -- # lcov --version 01:19:07.835 05:13:59 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:19:08.093 05:13:59 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:19:08.093 05:13:59 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:19:08.093 05:13:59 event -- scripts/common.sh@333 -- # local ver1 ver1_l 01:19:08.093 05:13:59 event -- scripts/common.sh@334 -- # local ver2 ver2_l 01:19:08.093 05:13:59 event -- scripts/common.sh@336 -- # IFS=.-: 01:19:08.093 05:13:59 event -- scripts/common.sh@336 -- # read -ra ver1 01:19:08.093 05:13:59 event -- scripts/common.sh@337 -- # IFS=.-: 01:19:08.093 05:13:59 event -- scripts/common.sh@337 -- # read -ra ver2 01:19:08.093 05:13:59 event -- scripts/common.sh@338 -- # local 'op=<' 01:19:08.093 05:13:59 event -- scripts/common.sh@340 -- # ver1_l=2 01:19:08.093 05:13:59 event -- scripts/common.sh@341 -- # ver2_l=1 01:19:08.093 05:13:59 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:19:08.093 05:13:59 event -- scripts/common.sh@344 -- # case "$op" in 01:19:08.093 05:13:59 event -- scripts/common.sh@345 -- # : 1 01:19:08.093 05:13:59 event -- scripts/common.sh@364 -- # (( v = 0 )) 01:19:08.093 05:13:59 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:19:08.093 05:13:59 event -- scripts/common.sh@365 -- # decimal 1 01:19:08.093 05:13:59 event -- scripts/common.sh@353 -- # local d=1 01:19:08.093 05:13:59 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:19:08.093 05:13:59 event -- scripts/common.sh@355 -- # echo 1 01:19:08.093 05:13:59 event -- scripts/common.sh@365 -- # ver1[v]=1 01:19:08.093 05:13:59 event -- scripts/common.sh@366 -- # decimal 2 01:19:08.093 05:13:59 event -- scripts/common.sh@353 -- # local d=2 01:19:08.093 05:13:59 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:19:08.093 05:13:59 event -- scripts/common.sh@355 -- # echo 2 01:19:08.093 05:13:59 event -- scripts/common.sh@366 -- # ver2[v]=2 01:19:08.093 05:13:59 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:19:08.093 05:13:59 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:19:08.093 05:13:59 event -- scripts/common.sh@368 -- # return 0 01:19:08.093 05:13:59 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:19:08.093 05:13:59 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:19:08.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:19:08.093 --rc genhtml_branch_coverage=1 01:19:08.093 --rc genhtml_function_coverage=1 01:19:08.093 --rc genhtml_legend=1 01:19:08.093 --rc geninfo_all_blocks=1 01:19:08.093 --rc geninfo_unexecuted_blocks=1 01:19:08.093 01:19:08.093 ' 01:19:08.093 05:13:59 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:19:08.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:19:08.093 --rc genhtml_branch_coverage=1 01:19:08.093 --rc genhtml_function_coverage=1 01:19:08.093 --rc genhtml_legend=1 01:19:08.093 --rc geninfo_all_blocks=1 01:19:08.093 --rc geninfo_unexecuted_blocks=1 01:19:08.093 01:19:08.093 ' 01:19:08.093 05:13:59 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:19:08.093 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 01:19:08.093 --rc genhtml_branch_coverage=1 01:19:08.093 --rc genhtml_function_coverage=1 01:19:08.093 --rc genhtml_legend=1 01:19:08.093 --rc geninfo_all_blocks=1 01:19:08.093 --rc geninfo_unexecuted_blocks=1 01:19:08.093 01:19:08.093 ' 01:19:08.093 05:13:59 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:19:08.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:19:08.093 --rc genhtml_branch_coverage=1 01:19:08.093 --rc genhtml_function_coverage=1 01:19:08.093 --rc genhtml_legend=1 01:19:08.093 --rc geninfo_all_blocks=1 01:19:08.093 --rc geninfo_unexecuted_blocks=1 01:19:08.093 01:19:08.093 ' 01:19:08.093 05:13:59 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 01:19:08.093 05:13:59 event -- bdev/nbd_common.sh@6 -- # set -e 01:19:08.094 05:13:59 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 01:19:08.094 05:13:59 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 01:19:08.094 05:13:59 event -- common/autotest_common.sh@1111 -- # xtrace_disable 01:19:08.094 05:13:59 event -- common/autotest_common.sh@10 -- # set +x 01:19:08.094 ************************************ 01:19:08.094 START TEST event_perf 01:19:08.094 ************************************ 01:19:08.094 05:13:59 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 01:19:08.094 Running I/O for 1 seconds...[2024-12-09 05:13:59.616009] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
01:19:08.094 [2024-12-09 05:13:59.616447] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57917 ] 01:19:08.352 [2024-12-09 05:13:59.790729] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 01:19:08.352 [2024-12-09 05:13:59.948073] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:19:08.352 [2024-12-09 05:13:59.948242] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:19:08.352 [2024-12-09 05:13:59.948377] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:19:08.352 [2024-12-09 05:13:59.948400] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 01:19:09.754 Running I/O for 1 seconds... 01:19:09.755 lcore 0: 117378 01:19:09.755 lcore 1: 117379 01:19:09.755 lcore 2: 117379 01:19:09.755 lcore 3: 117380 01:19:09.755 done. 
01:19:09.755 01:19:09.755 real 0m1.741s 01:19:09.755 user 0m4.466s 01:19:09.755 sys 0m0.144s 01:19:09.755 05:14:01 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 01:19:09.755 05:14:01 event.event_perf -- common/autotest_common.sh@10 -- # set +x 01:19:09.755 ************************************ 01:19:09.755 END TEST event_perf 01:19:09.755 ************************************ 01:19:09.755 05:14:01 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 01:19:09.755 05:14:01 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 01:19:09.755 05:14:01 event -- common/autotest_common.sh@1111 -- # xtrace_disable 01:19:09.755 05:14:01 event -- common/autotest_common.sh@10 -- # set +x 01:19:10.013 ************************************ 01:19:10.013 START TEST event_reactor 01:19:10.013 ************************************ 01:19:10.013 05:14:01 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 01:19:10.013 [2024-12-09 05:14:01.411815] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
01:19:10.013 [2024-12-09 05:14:01.412028] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57951 ] 01:19:10.013 [2024-12-09 05:14:01.598342] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:19:10.271 [2024-12-09 05:14:01.748115] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:19:11.647 test_start 01:19:11.647 oneshot 01:19:11.647 tick 100 01:19:11.647 tick 100 01:19:11.647 tick 250 01:19:11.647 tick 100 01:19:11.647 tick 100 01:19:11.647 tick 100 01:19:11.647 tick 250 01:19:11.647 tick 500 01:19:11.647 tick 100 01:19:11.647 tick 100 01:19:11.647 tick 250 01:19:11.647 tick 100 01:19:11.647 tick 100 01:19:11.647 test_end 01:19:11.647 01:19:11.647 real 0m1.718s 01:19:11.647 user 0m1.479s 01:19:11.647 sys 0m0.127s 01:19:11.647 05:14:03 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 01:19:11.647 ************************************ 01:19:11.647 END TEST event_reactor 01:19:11.647 ************************************ 01:19:11.647 05:14:03 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 01:19:11.647 05:14:03 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 01:19:11.647 05:14:03 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 01:19:11.647 05:14:03 event -- common/autotest_common.sh@1111 -- # xtrace_disable 01:19:11.647 05:14:03 event -- common/autotest_common.sh@10 -- # set +x 01:19:11.647 ************************************ 01:19:11.647 START TEST event_reactor_perf 01:19:11.647 ************************************ 01:19:11.647 05:14:03 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 01:19:11.647 [2024-12-09 
05:14:03.174974] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:19:11.647 [2024-12-09 05:14:03.175173] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57993 ] 01:19:11.906 [2024-12-09 05:14:03.356820] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:19:11.906 [2024-12-09 05:14:03.507176] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:19:13.284 test_start 01:19:13.284 test_end 01:19:13.284 Performance: 306609 events per second 01:19:13.284 01:19:13.284 real 0m1.703s 01:19:13.284 user 0m1.475s 01:19:13.284 sys 0m0.119s 01:19:13.284 05:14:04 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 01:19:13.284 05:14:04 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 01:19:13.284 ************************************ 01:19:13.284 END TEST event_reactor_perf 01:19:13.284 ************************************ 01:19:13.284 05:14:04 event -- event/event.sh@49 -- # uname -s 01:19:13.284 05:14:04 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 01:19:13.284 05:14:04 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 01:19:13.284 05:14:04 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:19:13.284 05:14:04 event -- common/autotest_common.sh@1111 -- # xtrace_disable 01:19:13.284 05:14:04 event -- common/autotest_common.sh@10 -- # set +x 01:19:13.284 ************************************ 01:19:13.284 START TEST event_scheduler 01:19:13.284 ************************************ 01:19:13.284 05:14:04 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 01:19:13.544 * Looking for test storage... 
01:19:13.544 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 01:19:13.544 05:14:04 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:19:13.544 05:14:04 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 01:19:13.544 05:14:04 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:19:13.544 05:14:05 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:19:13.544 05:14:05 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:19:13.544 05:14:05 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 01:19:13.544 05:14:05 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 01:19:13.544 05:14:05 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 01:19:13.544 05:14:05 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 01:19:13.544 05:14:05 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 01:19:13.544 05:14:05 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 01:19:13.544 05:14:05 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 01:19:13.544 05:14:05 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 01:19:13.544 05:14:05 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 01:19:13.544 05:14:05 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:19:13.544 05:14:05 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 01:19:13.544 05:14:05 event.event_scheduler -- scripts/common.sh@345 -- # : 1 01:19:13.544 05:14:05 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 01:19:13.544 05:14:05 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:19:13.544 05:14:05 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 01:19:13.544 05:14:05 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 01:19:13.544 05:14:05 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:19:13.544 05:14:05 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 01:19:13.544 05:14:05 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 01:19:13.544 05:14:05 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 01:19:13.544 05:14:05 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 01:19:13.544 05:14:05 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:19:13.544 05:14:05 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 01:19:13.544 05:14:05 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 01:19:13.544 05:14:05 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:19:13.544 05:14:05 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:19:13.544 05:14:05 event.event_scheduler -- scripts/common.sh@368 -- # return 0 01:19:13.544 05:14:05 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:19:13.544 05:14:05 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:19:13.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:19:13.544 --rc genhtml_branch_coverage=1 01:19:13.544 --rc genhtml_function_coverage=1 01:19:13.544 --rc genhtml_legend=1 01:19:13.544 --rc geninfo_all_blocks=1 01:19:13.544 --rc geninfo_unexecuted_blocks=1 01:19:13.544 01:19:13.544 ' 01:19:13.544 05:14:05 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:19:13.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:19:13.544 --rc genhtml_branch_coverage=1 01:19:13.544 --rc genhtml_function_coverage=1 01:19:13.544 --rc 
genhtml_legend=1 01:19:13.544 --rc geninfo_all_blocks=1 01:19:13.544 --rc geninfo_unexecuted_blocks=1 01:19:13.544 01:19:13.544 ' 01:19:13.544 05:14:05 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:19:13.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:19:13.544 --rc genhtml_branch_coverage=1 01:19:13.544 --rc genhtml_function_coverage=1 01:19:13.544 --rc genhtml_legend=1 01:19:13.544 --rc geninfo_all_blocks=1 01:19:13.544 --rc geninfo_unexecuted_blocks=1 01:19:13.544 01:19:13.544 ' 01:19:13.544 05:14:05 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:19:13.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:19:13.544 --rc genhtml_branch_coverage=1 01:19:13.544 --rc genhtml_function_coverage=1 01:19:13.544 --rc genhtml_legend=1 01:19:13.544 --rc geninfo_all_blocks=1 01:19:13.544 --rc geninfo_unexecuted_blocks=1 01:19:13.544 01:19:13.544 ' 01:19:13.544 05:14:05 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 01:19:13.544 05:14:05 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58069 01:19:13.544 05:14:05 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 01:19:13.544 05:14:05 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58069 01:19:13.544 05:14:05 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 58069 ']' 01:19:13.544 05:14:05 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 01:19:13.544 05:14:05 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:19:13.544 05:14:05 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 01:19:13.544 05:14:05 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 01:19:13.544 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:19:13.544 05:14:05 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 01:19:13.544 05:14:05 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 01:19:13.803 [2024-12-09 05:14:05.183146] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:19:13.803 [2024-12-09 05:14:05.183347] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58069 ] 01:19:13.803 [2024-12-09 05:14:05.373501] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 01:19:14.061 [2024-12-09 05:14:05.541071] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:19:14.061 [2024-12-09 05:14:05.541205] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:19:14.061 [2024-12-09 05:14:05.541383] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 01:19:14.061 [2024-12-09 05:14:05.541945] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:19:14.628 05:14:06 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:19:14.628 05:14:06 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 01:19:14.628 05:14:06 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 01:19:14.628 05:14:06 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 01:19:14.628 05:14:06 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 01:19:14.628 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 01:19:14.628 POWER: Cannot set governor of lcore 0 to userspace 01:19:14.628 POWER: failed to open 
/sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 01:19:14.628 POWER: Cannot set governor of lcore 0 to performance 01:19:14.628 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 01:19:14.628 POWER: Cannot set governor of lcore 0 to userspace 01:19:14.628 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 01:19:14.628 POWER: Cannot set governor of lcore 0 to userspace 01:19:14.628 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 01:19:14.628 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 01:19:14.628 POWER: Unable to set Power Management Environment for lcore 0 01:19:14.628 [2024-12-09 05:14:06.180285] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 01:19:14.628 [2024-12-09 05:14:06.180315] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 01:19:14.628 [2024-12-09 05:14:06.180329] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 01:19:14.628 [2024-12-09 05:14:06.180390] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 01:19:14.628 [2024-12-09 05:14:06.180407] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 01:19:14.628 [2024-12-09 05:14:06.180421] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 01:19:14.628 05:14:06 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:19:14.628 05:14:06 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 01:19:14.628 05:14:06 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 01:19:14.628 05:14:06 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 01:19:15.196 [2024-12-09 05:14:06.508525] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
01:19:15.196 05:14:06 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:19:15.196 05:14:06 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 01:19:15.196 05:14:06 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:19:15.196 05:14:06 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 01:19:15.196 05:14:06 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 01:19:15.196 ************************************ 01:19:15.196 START TEST scheduler_create_thread 01:19:15.196 ************************************ 01:19:15.196 05:14:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 01:19:15.196 05:14:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 01:19:15.196 05:14:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 01:19:15.196 05:14:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 01:19:15.196 2 01:19:15.196 05:14:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:19:15.196 05:14:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 01:19:15.196 05:14:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 01:19:15.196 05:14:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 01:19:15.196 3 01:19:15.196 05:14:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:19:15.196 05:14:06 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 01:19:15.196 05:14:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 01:19:15.196 05:14:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 01:19:15.196 4 01:19:15.196 05:14:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:19:15.196 05:14:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 01:19:15.196 05:14:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 01:19:15.196 05:14:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 01:19:15.196 5 01:19:15.196 05:14:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:19:15.196 05:14:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 01:19:15.196 05:14:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 01:19:15.196 05:14:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 01:19:15.196 6 01:19:15.196 05:14:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:19:15.196 05:14:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 01:19:15.196 05:14:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 01:19:15.196 05:14:06 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@10 -- # set +x 01:19:15.196 7 01:19:15.196 05:14:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:19:15.196 05:14:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 01:19:15.196 05:14:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 01:19:15.196 05:14:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 01:19:15.196 8 01:19:15.196 05:14:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:19:15.196 05:14:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 01:19:15.196 05:14:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 01:19:15.196 05:14:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 01:19:15.196 9 01:19:15.196 05:14:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:19:15.196 05:14:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 01:19:15.196 05:14:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 01:19:15.196 05:14:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 01:19:15.196 10 01:19:15.196 05:14:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:19:15.196 05:14:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n 
half_active -a 0 01:19:15.196 05:14:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 01:19:15.196 05:14:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 01:19:15.197 05:14:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:19:15.197 05:14:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 01:19:15.197 05:14:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 01:19:15.197 05:14:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 01:19:15.197 05:14:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 01:19:15.197 05:14:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:19:15.197 05:14:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 01:19:15.197 05:14:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 01:19:15.197 05:14:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 01:19:15.197 05:14:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:19:15.197 05:14:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 01:19:15.197 05:14:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 01:19:15.197 05:14:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 01:19:15.197 05:14:06 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 01:19:15.197 ************************************ 01:19:15.197 END TEST scheduler_create_thread 01:19:15.197 ************************************ 01:19:15.197 05:14:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:19:15.197 01:19:15.197 real 0m0.110s 01:19:15.197 user 0m0.017s 01:19:15.197 sys 0m0.005s 01:19:15.197 05:14:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 01:19:15.197 05:14:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 01:19:15.197 05:14:06 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 01:19:15.197 05:14:06 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58069 01:19:15.197 05:14:06 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 58069 ']' 01:19:15.197 05:14:06 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 58069 01:19:15.197 05:14:06 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 01:19:15.197 05:14:06 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:19:15.197 05:14:06 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58069 01:19:15.197 killing process with pid 58069 01:19:15.197 05:14:06 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 01:19:15.197 05:14:06 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 01:19:15.197 05:14:06 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58069' 01:19:15.197 05:14:06 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 58069 01:19:15.197 05:14:06 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 58069 01:19:15.764 [2024-12-09 05:14:07.115420] scheduler.c: 
360:test_shutdown: *NOTICE*: Scheduler test application stopped. 01:19:16.700 01:19:16.700 real 0m3.372s 01:19:16.700 user 0m5.329s 01:19:16.700 sys 0m0.561s 01:19:16.700 05:14:08 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 01:19:16.700 05:14:08 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 01:19:16.700 ************************************ 01:19:16.700 END TEST event_scheduler 01:19:16.700 ************************************ 01:19:16.700 05:14:08 event -- event/event.sh@51 -- # modprobe -n nbd 01:19:16.700 05:14:08 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 01:19:16.700 05:14:08 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:19:16.700 05:14:08 event -- common/autotest_common.sh@1111 -- # xtrace_disable 01:19:16.700 05:14:08 event -- common/autotest_common.sh@10 -- # set +x 01:19:16.959 ************************************ 01:19:16.959 START TEST app_repeat 01:19:16.959 ************************************ 01:19:16.959 05:14:08 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 01:19:16.959 05:14:08 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:19:16.959 05:14:08 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 01:19:16.959 05:14:08 event.app_repeat -- event/event.sh@13 -- # local nbd_list 01:19:16.959 05:14:08 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 01:19:16.959 05:14:08 event.app_repeat -- event/event.sh@14 -- # local bdev_list 01:19:16.959 05:14:08 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 01:19:16.959 05:14:08 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 01:19:16.959 Process app_repeat pid: 58153 01:19:16.959 spdk_app_start Round 0 01:19:16.959 05:14:08 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58153 01:19:16.959 05:14:08 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' 
SIGINT SIGTERM EXIT 01:19:16.959 05:14:08 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58153' 01:19:16.959 05:14:08 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 01:19:16.959 05:14:08 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 01:19:16.959 05:14:08 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 01:19:16.959 05:14:08 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58153 /var/tmp/spdk-nbd.sock 01:19:16.959 05:14:08 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58153 ']' 01:19:16.959 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 01:19:16.959 05:14:08 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 01:19:16.959 05:14:08 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 01:19:16.959 05:14:08 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 01:19:16.959 05:14:08 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 01:19:16.959 05:14:08 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 01:19:16.959 [2024-12-09 05:14:08.389629] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
01:19:16.959 [2024-12-09 05:14:08.389820] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58153 ] 01:19:17.217 [2024-12-09 05:14:08.586113] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 01:19:17.217 [2024-12-09 05:14:08.769506] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:19:17.217 [2024-12-09 05:14:08.769567] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:19:18.151 05:14:09 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:19:18.151 05:14:09 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 01:19:18.151 05:14:09 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 01:19:18.408 Malloc0 01:19:18.408 05:14:09 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 01:19:18.665 Malloc1 01:19:18.665 05:14:10 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 01:19:18.665 05:14:10 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:19:18.665 05:14:10 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 01:19:18.665 05:14:10 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 01:19:18.665 05:14:10 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 01:19:18.665 05:14:10 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 01:19:18.665 05:14:10 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 01:19:18.665 05:14:10 event.app_repeat -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:19:18.665 05:14:10 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 01:19:18.665 05:14:10 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 01:19:18.665 05:14:10 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 01:19:18.665 05:14:10 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 01:19:18.665 05:14:10 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 01:19:18.665 05:14:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 01:19:18.665 05:14:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 01:19:18.665 05:14:10 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 01:19:18.923 /dev/nbd0 01:19:18.923 05:14:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 01:19:18.923 05:14:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 01:19:18.923 05:14:10 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 01:19:18.923 05:14:10 event.app_repeat -- common/autotest_common.sh@873 -- # local i 01:19:18.923 05:14:10 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 01:19:18.923 05:14:10 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 01:19:18.923 05:14:10 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 01:19:18.923 05:14:10 event.app_repeat -- common/autotest_common.sh@877 -- # break 01:19:18.923 05:14:10 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 01:19:18.923 05:14:10 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 01:19:18.923 05:14:10 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 01:19:18.923 1+0 records in 01:19:18.923 1+0 
records out 01:19:18.923 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000565474 s, 7.2 MB/s 01:19:18.923 05:14:10 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 01:19:18.923 05:14:10 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 01:19:18.923 05:14:10 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 01:19:18.923 05:14:10 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 01:19:18.923 05:14:10 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 01:19:18.923 05:14:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 01:19:18.923 05:14:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 01:19:18.923 05:14:10 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 01:19:19.488 /dev/nbd1 01:19:19.488 05:14:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 01:19:19.488 05:14:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 01:19:19.488 05:14:10 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 01:19:19.488 05:14:10 event.app_repeat -- common/autotest_common.sh@873 -- # local i 01:19:19.488 05:14:10 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 01:19:19.488 05:14:10 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 01:19:19.488 05:14:10 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 01:19:19.488 05:14:10 event.app_repeat -- common/autotest_common.sh@877 -- # break 01:19:19.488 05:14:10 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 01:19:19.488 05:14:10 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 01:19:19.488 05:14:10 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 01:19:19.488 1+0 records in 01:19:19.488 1+0 records out 01:19:19.488 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000444365 s, 9.2 MB/s 01:19:19.488 05:14:10 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 01:19:19.488 05:14:10 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 01:19:19.488 05:14:10 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 01:19:19.488 05:14:10 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 01:19:19.488 05:14:10 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 01:19:19.488 05:14:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 01:19:19.488 05:14:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 01:19:19.488 05:14:10 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 01:19:19.488 05:14:10 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:19:19.488 05:14:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 01:19:19.746 05:14:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 01:19:19.746 { 01:19:19.746 "nbd_device": "/dev/nbd0", 01:19:19.746 "bdev_name": "Malloc0" 01:19:19.746 }, 01:19:19.746 { 01:19:19.746 "nbd_device": "/dev/nbd1", 01:19:19.746 "bdev_name": "Malloc1" 01:19:19.746 } 01:19:19.746 ]' 01:19:19.746 05:14:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 01:19:19.746 { 01:19:19.746 "nbd_device": "/dev/nbd0", 01:19:19.746 "bdev_name": "Malloc0" 01:19:19.746 }, 01:19:19.746 { 01:19:19.746 "nbd_device": "/dev/nbd1", 01:19:19.746 "bdev_name": "Malloc1" 01:19:19.746 } 01:19:19.746 ]' 01:19:19.746 05:14:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 
01:19:19.746 05:14:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 01:19:19.746 /dev/nbd1' 01:19:19.746 05:14:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 01:19:19.746 /dev/nbd1' 01:19:19.746 05:14:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 01:19:19.746 05:14:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 01:19:19.746 05:14:11 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 01:19:19.746 05:14:11 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 01:19:19.746 05:14:11 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 01:19:19.746 05:14:11 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 01:19:19.747 05:14:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 01:19:19.747 05:14:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 01:19:19.747 05:14:11 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 01:19:19.747 05:14:11 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 01:19:19.747 05:14:11 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 01:19:19.747 05:14:11 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 01:19:19.747 256+0 records in 01:19:19.747 256+0 records out 01:19:19.747 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00649808 s, 161 MB/s 01:19:19.747 05:14:11 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 01:19:19.747 05:14:11 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 01:19:19.747 256+0 records in 01:19:19.747 256+0 records out 01:19:19.747 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0300387 s, 34.9 MB/s 01:19:19.747 05:14:11 event.app_repeat -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 01:19:19.747 05:14:11 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 01:19:19.747 256+0 records in 01:19:19.747 256+0 records out 01:19:19.747 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0350666 s, 29.9 MB/s 01:19:19.747 05:14:11 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 01:19:19.747 05:14:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 01:19:19.747 05:14:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 01:19:19.747 05:14:11 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 01:19:19.747 05:14:11 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 01:19:19.747 05:14:11 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 01:19:19.747 05:14:11 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 01:19:19.747 05:14:11 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 01:19:19.747 05:14:11 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 01:19:19.747 05:14:11 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 01:19:19.747 05:14:11 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 01:19:19.747 05:14:11 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 01:19:19.747 05:14:11 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 01:19:19.747 05:14:11 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:19:19.747 05:14:11 event.app_repeat -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 01:19:19.747 05:14:11 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 01:19:19.747 05:14:11 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 01:19:19.747 05:14:11 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 01:19:19.747 05:14:11 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 01:19:20.004 05:14:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 01:19:20.004 05:14:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 01:19:20.004 05:14:11 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 01:19:20.004 05:14:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:19:20.005 05:14:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:19:20.005 05:14:11 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 01:19:20.005 05:14:11 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 01:19:20.005 05:14:11 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 01:19:20.005 05:14:11 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 01:19:20.005 05:14:11 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 01:19:20.261 05:14:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 01:19:20.261 05:14:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 01:19:20.261 05:14:11 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 01:19:20.261 05:14:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:19:20.261 05:14:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:19:20.261 05:14:11 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 01:19:20.261 05:14:11 event.app_repeat -- bdev/nbd_common.sh@41 -- # 
break 01:19:20.261 05:14:11 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 01:19:20.261 05:14:11 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 01:19:20.261 05:14:11 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:19:20.261 05:14:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 01:19:20.823 05:14:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 01:19:20.823 05:14:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 01:19:20.823 05:14:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 01:19:20.823 05:14:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 01:19:20.823 05:14:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 01:19:20.823 05:14:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 01:19:20.823 05:14:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 01:19:20.823 05:14:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 01:19:20.823 05:14:12 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 01:19:20.823 05:14:12 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 01:19:20.823 05:14:12 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 01:19:20.823 05:14:12 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 01:19:20.823 05:14:12 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 01:19:21.080 05:14:12 event.app_repeat -- event/event.sh@35 -- # sleep 3 01:19:22.505 [2024-12-09 05:14:13.952491] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 01:19:22.505 [2024-12-09 05:14:14.088846] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:19:22.505 [2024-12-09 05:14:14.088870] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:19:22.764 
[2024-12-09 05:14:14.297601] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 01:19:22.764 [2024-12-09 05:14:14.297760] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 01:19:24.136 spdk_app_start Round 1 01:19:24.136 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 01:19:24.136 05:14:15 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 01:19:24.136 05:14:15 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 01:19:24.136 05:14:15 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58153 /var/tmp/spdk-nbd.sock 01:19:24.136 05:14:15 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58153 ']' 01:19:24.136 05:14:15 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 01:19:24.136 05:14:15 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 01:19:24.136 05:14:15 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
01:19:24.137 05:14:15 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 01:19:24.137 05:14:15 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 01:19:24.394 05:14:15 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:19:24.394 05:14:15 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 01:19:24.394 05:14:15 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 01:19:24.959 Malloc0 01:19:24.959 05:14:16 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 01:19:25.218 Malloc1 01:19:25.218 05:14:16 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 01:19:25.218 05:14:16 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:19:25.218 05:14:16 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 01:19:25.218 05:14:16 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 01:19:25.218 05:14:16 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 01:19:25.218 05:14:16 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 01:19:25.218 05:14:16 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 01:19:25.218 05:14:16 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:19:25.218 05:14:16 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 01:19:25.218 05:14:16 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 01:19:25.218 05:14:16 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 01:19:25.218 05:14:16 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 01:19:25.218 05:14:16 
event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 01:19:25.218 05:14:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 01:19:25.218 05:14:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 01:19:25.218 05:14:16 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 01:19:25.476 /dev/nbd0 01:19:25.476 05:14:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 01:19:25.476 05:14:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 01:19:25.476 05:14:17 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 01:19:25.476 05:14:17 event.app_repeat -- common/autotest_common.sh@873 -- # local i 01:19:25.476 05:14:17 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 01:19:25.476 05:14:17 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 01:19:25.476 05:14:17 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 01:19:25.476 05:14:17 event.app_repeat -- common/autotest_common.sh@877 -- # break 01:19:25.476 05:14:17 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 01:19:25.476 05:14:17 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 01:19:25.476 05:14:17 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 01:19:25.476 1+0 records in 01:19:25.476 1+0 records out 01:19:25.476 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000359504 s, 11.4 MB/s 01:19:25.476 05:14:17 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 01:19:25.476 05:14:17 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 01:19:25.476 05:14:17 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 01:19:25.476 
05:14:17 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 01:19:25.476 05:14:17 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 01:19:25.476 05:14:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 01:19:25.476 05:14:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 01:19:25.476 05:14:17 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 01:19:25.736 /dev/nbd1 01:19:25.736 05:14:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 01:19:25.736 05:14:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 01:19:25.736 05:14:17 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 01:19:25.736 05:14:17 event.app_repeat -- common/autotest_common.sh@873 -- # local i 01:19:25.736 05:14:17 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 01:19:25.736 05:14:17 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 01:19:25.736 05:14:17 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 01:19:25.736 05:14:17 event.app_repeat -- common/autotest_common.sh@877 -- # break 01:19:25.736 05:14:17 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 01:19:25.736 05:14:17 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 01:19:25.736 05:14:17 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 01:19:25.736 1+0 records in 01:19:25.736 1+0 records out 01:19:25.736 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00030315 s, 13.5 MB/s 01:19:25.994 05:14:17 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 01:19:25.994 05:14:17 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 01:19:25.994 05:14:17 event.app_repeat 
-- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 01:19:25.994 05:14:17 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 01:19:25.994 05:14:17 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 01:19:25.994 05:14:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 01:19:25.994 05:14:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 01:19:25.994 05:14:17 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 01:19:25.994 05:14:17 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:19:25.994 05:14:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 01:19:26.253 05:14:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 01:19:26.253 { 01:19:26.253 "nbd_device": "/dev/nbd0", 01:19:26.253 "bdev_name": "Malloc0" 01:19:26.253 }, 01:19:26.253 { 01:19:26.253 "nbd_device": "/dev/nbd1", 01:19:26.253 "bdev_name": "Malloc1" 01:19:26.253 } 01:19:26.253 ]' 01:19:26.253 05:14:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 01:19:26.253 { 01:19:26.253 "nbd_device": "/dev/nbd0", 01:19:26.253 "bdev_name": "Malloc0" 01:19:26.253 }, 01:19:26.253 { 01:19:26.253 "nbd_device": "/dev/nbd1", 01:19:26.253 "bdev_name": "Malloc1" 01:19:26.253 } 01:19:26.253 ]' 01:19:26.253 05:14:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 01:19:26.253 05:14:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 01:19:26.253 /dev/nbd1' 01:19:26.253 05:14:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 01:19:26.253 /dev/nbd1' 01:19:26.253 05:14:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 01:19:26.253 05:14:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 01:19:26.253 05:14:17 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 01:19:26.253 
05:14:17 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 01:19:26.253 05:14:17 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 01:19:26.253 05:14:17 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 01:19:26.253 05:14:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 01:19:26.253 05:14:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 01:19:26.253 05:14:17 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 01:19:26.253 05:14:17 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 01:19:26.253 05:14:17 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 01:19:26.253 05:14:17 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 01:19:26.253 256+0 records in 01:19:26.253 256+0 records out 01:19:26.253 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00870039 s, 121 MB/s 01:19:26.253 05:14:17 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 01:19:26.253 05:14:17 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 01:19:26.253 256+0 records in 01:19:26.253 256+0 records out 01:19:26.253 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0276814 s, 37.9 MB/s 01:19:26.253 05:14:17 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 01:19:26.253 05:14:17 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 01:19:26.253 256+0 records in 01:19:26.253 256+0 records out 01:19:26.253 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0282423 s, 37.1 MB/s 01:19:26.253 05:14:17 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 
01:19:26.253 05:14:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 01:19:26.253 05:14:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 01:19:26.253 05:14:17 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 01:19:26.253 05:14:17 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 01:19:26.253 05:14:17 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 01:19:26.253 05:14:17 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 01:19:26.253 05:14:17 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 01:19:26.253 05:14:17 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 01:19:26.253 05:14:17 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 01:19:26.253 05:14:17 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 01:19:26.253 05:14:17 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 01:19:26.253 05:14:17 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 01:19:26.253 05:14:17 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:19:26.253 05:14:17 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 01:19:26.253 05:14:17 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 01:19:26.253 05:14:17 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 01:19:26.253 05:14:17 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 01:19:26.253 05:14:17 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 01:19:26.511 05:14:18 
event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 01:19:26.511 05:14:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 01:19:26.511 05:14:18 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 01:19:26.511 05:14:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:19:26.511 05:14:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:19:26.511 05:14:18 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 01:19:26.511 05:14:18 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 01:19:26.511 05:14:18 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 01:19:26.511 05:14:18 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 01:19:26.511 05:14:18 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 01:19:26.770 05:14:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 01:19:26.770 05:14:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 01:19:26.770 05:14:18 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 01:19:26.770 05:14:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:19:26.770 05:14:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:19:26.770 05:14:18 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 01:19:26.770 05:14:18 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 01:19:26.770 05:14:18 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 01:19:26.770 05:14:18 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 01:19:26.770 05:14:18 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:19:26.770 05:14:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 01:19:27.337 05:14:18 
event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 01:19:27.337 05:14:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 01:19:27.337 05:14:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 01:19:27.337 05:14:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 01:19:27.337 05:14:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 01:19:27.337 05:14:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 01:19:27.337 05:14:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 01:19:27.337 05:14:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 01:19:27.337 05:14:18 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 01:19:27.337 05:14:18 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 01:19:27.337 05:14:18 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 01:19:27.337 05:14:18 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 01:19:27.337 05:14:18 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 01:19:27.594 05:14:19 event.app_repeat -- event/event.sh@35 -- # sleep 3 01:19:28.965 [2024-12-09 05:14:20.259657] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 01:19:28.965 [2024-12-09 05:14:20.386419] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:19:28.965 [2024-12-09 05:14:20.386424] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:19:29.222 [2024-12-09 05:14:20.583461] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 01:19:29.222 [2024-12-09 05:14:20.583580] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 01:19:30.590 spdk_app_start Round 2 01:19:30.590 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
01:19:30.590 05:14:22 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 01:19:30.590 05:14:22 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 01:19:30.590 05:14:22 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58153 /var/tmp/spdk-nbd.sock 01:19:30.590 05:14:22 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58153 ']' 01:19:30.590 05:14:22 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 01:19:30.590 05:14:22 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 01:19:30.590 05:14:22 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 01:19:30.590 05:14:22 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 01:19:30.590 05:14:22 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 01:19:31.153 05:14:22 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:19:31.153 05:14:22 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 01:19:31.153 05:14:22 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 01:19:31.410 Malloc0 01:19:31.410 05:14:22 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 01:19:31.669 Malloc1 01:19:31.669 05:14:23 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 01:19:31.669 05:14:23 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:19:31.669 05:14:23 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 01:19:31.669 05:14:23 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 01:19:31.669 05:14:23 event.app_repeat -- bdev/nbd_common.sh@92 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 01:19:31.669 05:14:23 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 01:19:31.669 05:14:23 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 01:19:31.669 05:14:23 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:19:31.669 05:14:23 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 01:19:31.669 05:14:23 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 01:19:31.669 05:14:23 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 01:19:31.669 05:14:23 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 01:19:31.669 05:14:23 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 01:19:31.669 05:14:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 01:19:31.669 05:14:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 01:19:31.669 05:14:23 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 01:19:31.928 /dev/nbd0 01:19:31.928 05:14:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 01:19:31.928 05:14:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 01:19:31.928 05:14:23 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 01:19:31.928 05:14:23 event.app_repeat -- common/autotest_common.sh@873 -- # local i 01:19:31.928 05:14:23 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 01:19:31.928 05:14:23 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 01:19:31.928 05:14:23 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 01:19:31.928 05:14:23 event.app_repeat -- common/autotest_common.sh@877 -- # break 01:19:31.928 05:14:23 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 
01:19:31.928 05:14:23 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 01:19:31.928 05:14:23 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 01:19:31.928 1+0 records in 01:19:31.928 1+0 records out 01:19:31.928 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000328459 s, 12.5 MB/s 01:19:31.928 05:14:23 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 01:19:31.928 05:14:23 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 01:19:31.928 05:14:23 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 01:19:31.928 05:14:23 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 01:19:31.928 05:14:23 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 01:19:31.928 05:14:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 01:19:31.928 05:14:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 01:19:31.928 05:14:23 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 01:19:32.187 /dev/nbd1 01:19:32.187 05:14:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 01:19:32.187 05:14:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 01:19:32.187 05:14:23 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 01:19:32.187 05:14:23 event.app_repeat -- common/autotest_common.sh@873 -- # local i 01:19:32.187 05:14:23 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 01:19:32.187 05:14:23 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 01:19:32.187 05:14:23 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 01:19:32.187 05:14:23 event.app_repeat -- 
common/autotest_common.sh@877 -- # break 01:19:32.187 05:14:23 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 01:19:32.187 05:14:23 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 01:19:32.187 05:14:23 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 01:19:32.187 1+0 records in 01:19:32.187 1+0 records out 01:19:32.187 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00031835 s, 12.9 MB/s 01:19:32.187 05:14:23 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 01:19:32.187 05:14:23 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 01:19:32.187 05:14:23 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 01:19:32.187 05:14:23 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 01:19:32.187 05:14:23 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 01:19:32.187 05:14:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 01:19:32.187 05:14:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 01:19:32.187 05:14:23 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 01:19:32.187 05:14:23 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:19:32.187 05:14:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 01:19:32.754 05:14:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 01:19:32.754 { 01:19:32.754 "nbd_device": "/dev/nbd0", 01:19:32.754 "bdev_name": "Malloc0" 01:19:32.754 }, 01:19:32.754 { 01:19:32.754 "nbd_device": "/dev/nbd1", 01:19:32.754 "bdev_name": "Malloc1" 01:19:32.754 } 01:19:32.754 ]' 01:19:32.754 05:14:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 01:19:32.754 05:14:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 01:19:32.754 { 01:19:32.754 "nbd_device": "/dev/nbd0", 01:19:32.754 "bdev_name": "Malloc0" 01:19:32.754 }, 01:19:32.754 { 01:19:32.754 "nbd_device": "/dev/nbd1", 01:19:32.754 "bdev_name": "Malloc1" 01:19:32.754 } 01:19:32.754 ]' 01:19:32.754 05:14:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 01:19:32.754 /dev/nbd1' 01:19:32.754 05:14:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 01:19:32.754 /dev/nbd1' 01:19:32.754 05:14:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 01:19:32.754 05:14:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 01:19:32.754 05:14:24 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 01:19:32.754 05:14:24 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 01:19:32.754 05:14:24 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 01:19:32.754 05:14:24 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 01:19:32.754 05:14:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 01:19:32.754 05:14:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 01:19:32.754 05:14:24 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 01:19:32.754 05:14:24 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 01:19:32.754 05:14:24 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 01:19:32.754 05:14:24 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 01:19:32.754 256+0 records in 01:19:32.754 256+0 records out 01:19:32.754 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00627417 s, 167 MB/s 01:19:32.754 05:14:24 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 01:19:32.754 05:14:24 
event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 01:19:32.754 256+0 records in 01:19:32.754 256+0 records out 01:19:32.754 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0264466 s, 39.6 MB/s 01:19:32.754 05:14:24 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 01:19:32.754 05:14:24 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 01:19:32.754 256+0 records in 01:19:32.754 256+0 records out 01:19:32.754 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0321062 s, 32.7 MB/s 01:19:32.754 05:14:24 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 01:19:32.754 05:14:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 01:19:32.754 05:14:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 01:19:32.754 05:14:24 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 01:19:32.754 05:14:24 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 01:19:32.754 05:14:24 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 01:19:32.754 05:14:24 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 01:19:32.754 05:14:24 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 01:19:32.754 05:14:24 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 01:19:32.754 05:14:24 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 01:19:32.754 05:14:24 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 01:19:32.754 05:14:24 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 01:19:32.754 05:14:24 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 01:19:32.754 05:14:24 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:19:32.754 05:14:24 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 01:19:32.754 05:14:24 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 01:19:32.754 05:14:24 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 01:19:32.754 05:14:24 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 01:19:32.754 05:14:24 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 01:19:33.013 05:14:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 01:19:33.013 05:14:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 01:19:33.013 05:14:24 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 01:19:33.013 05:14:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:19:33.013 05:14:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:19:33.013 05:14:24 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 01:19:33.013 05:14:24 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 01:19:33.013 05:14:24 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 01:19:33.013 05:14:24 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 01:19:33.013 05:14:24 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 01:19:33.272 05:14:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 01:19:33.272 05:14:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 01:19:33.272 05:14:24 event.app_repeat -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd1 01:19:33.272 05:14:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:19:33.272 05:14:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:19:33.272 05:14:24 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 01:19:33.272 05:14:24 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 01:19:33.272 05:14:24 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 01:19:33.272 05:14:24 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 01:19:33.272 05:14:24 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:19:33.272 05:14:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 01:19:33.530 05:14:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 01:19:33.530 05:14:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 01:19:33.530 05:14:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 01:19:33.530 05:14:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 01:19:33.530 05:14:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 01:19:33.530 05:14:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 01:19:33.530 05:14:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 01:19:33.530 05:14:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 01:19:33.530 05:14:25 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 01:19:33.530 05:14:25 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 01:19:33.530 05:14:25 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 01:19:33.530 05:14:25 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 01:19:33.530 05:14:25 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 01:19:34.094 05:14:25 event.app_repeat -- 
event/event.sh@35 -- # sleep 3 01:19:35.027 [2024-12-09 05:14:26.559507] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 01:19:35.285 [2024-12-09 05:14:26.685307] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:19:35.285 [2024-12-09 05:14:26.685329] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:19:35.285 [2024-12-09 05:14:26.878586] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 01:19:35.285 [2024-12-09 05:14:26.878731] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 01:19:37.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 01:19:37.184 05:14:28 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58153 /var/tmp/spdk-nbd.sock 01:19:37.184 05:14:28 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58153 ']' 01:19:37.184 05:14:28 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 01:19:37.184 05:14:28 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 01:19:37.184 05:14:28 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
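The `nbd_dd_data_verify` write/verify pair traced earlier (the `@76`–`@83` steps) boils down to: fill a pattern file with 1 MiB of `/dev/urandom`, `dd` it onto every device, then `cmp` the first 1M of each device against the pattern. The sketch below is an illustrative reconstruction; the `oflag=direct` flag from the trace is dropped so it also works on regular files, and everything else beyond the function name should be read as an assumption, not the SPDK original.

```shell
# Reconstruction of the nbd_dd_data_verify pattern from the trace.
nbd_dd_data_verify() {
    local operation=$1 tmp_file=$2
    shift 2
    local dev
    if [[ $operation == write ]]; then
        # 256 x 4096-byte blocks = 1 MiB of random test data
        dd if=/dev/urandom of="$tmp_file" bs=4096 count=256 2>/dev/null
        for dev in "$@"; do
            dd if="$tmp_file" of="$dev" bs=4096 count=256 2>/dev/null
        done
    elif [[ $operation == verify ]]; then
        for dev in "$@"; do
            # -n 1M limits the comparison to the bytes we actually wrote
            cmp -n 1M "$tmp_file" "$dev" || return 1
        done
    fi
}
```

Writing the same pattern through both nbd devices and comparing it back catches silent data corruption anywhere in the RPC-to-bdev path, which is the point of the `@100`/`@101` calls in the trace.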
01:19:37.184 05:14:28 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 01:19:37.184 05:14:28 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 01:19:37.185 05:14:28 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:19:37.185 05:14:28 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 01:19:37.185 05:14:28 event.app_repeat -- event/event.sh@39 -- # killprocess 58153 01:19:37.185 05:14:28 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 58153 ']' 01:19:37.185 05:14:28 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 58153 01:19:37.185 05:14:28 event.app_repeat -- common/autotest_common.sh@959 -- # uname 01:19:37.185 05:14:28 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:19:37.185 05:14:28 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58153 01:19:37.443 killing process with pid 58153 01:19:37.443 05:14:28 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:19:37.444 05:14:28 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:19:37.444 05:14:28 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58153' 01:19:37.444 05:14:28 event.app_repeat -- common/autotest_common.sh@973 -- # kill 58153 01:19:37.444 05:14:28 event.app_repeat -- common/autotest_common.sh@978 -- # wait 58153 01:19:38.379 spdk_app_start is called in Round 0. 01:19:38.379 Shutdown signal received, stop current app iteration 01:19:38.379 Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 reinitialization... 01:19:38.379 spdk_app_start is called in Round 1. 01:19:38.379 Shutdown signal received, stop current app iteration 01:19:38.379 Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 reinitialization... 01:19:38.379 spdk_app_start is called in Round 2. 
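The `killprocess` helper exercised just above follows a recognizable shape: confirm the pid is alive with `kill -0`, look up its command name with `ps` (the `reactor_0` seen in the trace), refuse to touch anything whose comm is `sudo`, then kill and reap it. The sketch below reconstructs that shape; exact option handling in the real `autotest_common.sh` may differ.

```shell
# Sketch of the killprocess pattern from the trace (details assumed).
killprocess() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 1     # is the process still alive?
    local process_name
    process_name=$(ps --no-headers -o comm= "$pid")
    [[ $process_name == sudo ]] && return 1    # never signal a sudo wrapper
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true            # reap it if it is our child
    return 0
}
```

The `kill -0` probe sends no signal at all; it only asks the kernel whether the pid exists and is signalable, which is why the trace shows it both before killing and (as the failing `kill: (58622) - No such process` later on) after.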
01:19:38.379 Shutdown signal received, stop current app iteration 01:19:38.379 Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 reinitialization... 01:19:38.379 spdk_app_start is called in Round 3. 01:19:38.379 Shutdown signal received, stop current app iteration 01:19:38.379 05:14:29 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 01:19:38.379 05:14:29 event.app_repeat -- event/event.sh@42 -- # return 0 01:19:38.379 01:19:38.379 real 0m21.423s 01:19:38.379 user 0m47.018s 01:19:38.379 sys 0m3.215s 01:19:38.379 05:14:29 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 01:19:38.379 ************************************ 01:19:38.379 END TEST app_repeat 01:19:38.379 ************************************ 01:19:38.379 05:14:29 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 01:19:38.379 05:14:29 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 01:19:38.379 05:14:29 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 01:19:38.379 05:14:29 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:19:38.379 05:14:29 event -- common/autotest_common.sh@1111 -- # xtrace_disable 01:19:38.379 05:14:29 event -- common/autotest_common.sh@10 -- # set +x 01:19:38.379 ************************************ 01:19:38.379 START TEST cpu_locks 01:19:38.379 ************************************ 01:19:38.379 05:14:29 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 01:19:38.379 * Looking for test storage... 
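The `nbd_get_count` steps in the trace parse the `nbd_get_disks` JSON with `jq`, pull each `nbd_device` field, and count the `/dev/nbd` matches with `grep -c`. A self-contained sketch follows; the wrapper function name is made up for illustration, and `jq` plus GNU `grep` are assumed to be installed, as they are on the test host. Note that `grep -c` prints `0` but exits 1 when nothing matches, which is exactly why the trace shows a bare `true` step when counting the empty post-teardown list.

```shell
# Sketch of the nbd_get_count counting logic (wrapper name assumed).
nbd_count_from_json() {
    local nbd_disks_json=$1
    local nbd_disks_name
    # Extract one device path per line, e.g. /dev/nbd0 and /dev/nbd1
    nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')
    # grep -c emits the count; "|| true" masks its exit-1 on zero matches
    echo "$nbd_disks_name" | grep -c /dev/nbd || true
}
```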
01:19:38.379 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 01:19:38.379 05:14:29 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:19:38.379 05:14:29 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 01:19:38.379 05:14:29 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:19:38.379 05:14:29 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:19:38.379 05:14:29 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:19:38.379 05:14:29 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 01:19:38.379 05:14:29 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 01:19:38.379 05:14:29 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 01:19:38.379 05:14:29 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 01:19:38.379 05:14:29 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 01:19:38.379 05:14:29 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 01:19:38.379 05:14:29 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 01:19:38.379 05:14:29 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 01:19:38.379 05:14:29 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 01:19:38.379 05:14:29 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:19:38.379 05:14:29 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 01:19:38.379 05:14:29 event.cpu_locks -- scripts/common.sh@345 -- # : 1 01:19:38.379 05:14:29 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 01:19:38.379 05:14:29 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:19:38.379 05:14:29 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 01:19:38.379 05:14:29 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 01:19:38.379 05:14:29 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:19:38.379 05:14:29 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 01:19:38.379 05:14:29 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 01:19:38.379 05:14:29 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 01:19:38.379 05:14:29 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 01:19:38.379 05:14:29 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:19:38.379 05:14:29 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 01:19:38.379 05:14:29 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 01:19:38.379 05:14:29 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:19:38.379 05:14:29 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:19:38.379 05:14:29 event.cpu_locks -- scripts/common.sh@368 -- # return 0 01:19:38.379 05:14:29 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:19:38.379 05:14:29 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:19:38.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:19:38.379 --rc genhtml_branch_coverage=1 01:19:38.379 --rc genhtml_function_coverage=1 01:19:38.379 --rc genhtml_legend=1 01:19:38.379 --rc geninfo_all_blocks=1 01:19:38.379 --rc geninfo_unexecuted_blocks=1 01:19:38.379 01:19:38.379 ' 01:19:38.379 05:14:29 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:19:38.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:19:38.379 --rc genhtml_branch_coverage=1 01:19:38.379 --rc genhtml_function_coverage=1 01:19:38.379 --rc genhtml_legend=1 01:19:38.379 --rc geninfo_all_blocks=1 01:19:38.379 --rc geninfo_unexecuted_blocks=1 
01:19:38.379 01:19:38.379 ' 01:19:38.379 05:14:29 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:19:38.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:19:38.379 --rc genhtml_branch_coverage=1 01:19:38.379 --rc genhtml_function_coverage=1 01:19:38.379 --rc genhtml_legend=1 01:19:38.379 --rc geninfo_all_blocks=1 01:19:38.379 --rc geninfo_unexecuted_blocks=1 01:19:38.379 01:19:38.379 ' 01:19:38.379 05:14:29 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:19:38.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:19:38.379 --rc genhtml_branch_coverage=1 01:19:38.379 --rc genhtml_function_coverage=1 01:19:38.379 --rc genhtml_legend=1 01:19:38.379 --rc geninfo_all_blocks=1 01:19:38.379 --rc geninfo_unexecuted_blocks=1 01:19:38.379 01:19:38.379 ' 01:19:38.379 05:14:29 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 01:19:38.379 05:14:29 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 01:19:38.380 05:14:29 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 01:19:38.380 05:14:29 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 01:19:38.380 05:14:29 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:19:38.380 05:14:29 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 01:19:38.380 05:14:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 01:19:38.380 ************************************ 01:19:38.380 START TEST default_locks 01:19:38.380 ************************************ 01:19:38.380 05:14:29 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 01:19:38.380 05:14:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58622 01:19:38.380 05:14:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58622 01:19:38.380 05:14:29 event.cpu_locks.default_locks -- 
common/autotest_common.sh@835 -- # '[' -z 58622 ']' 01:19:38.380 05:14:29 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:19:38.380 05:14:29 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 01:19:38.380 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:19:38.380 05:14:29 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:19:38.380 05:14:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 01:19:38.380 05:14:29 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 01:19:38.380 05:14:29 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 01:19:38.638 [2024-12-09 05:14:30.115888] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
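The `scripts/common.sh` version check traced a little earlier (the `lt 1.15 2` / `cmp_versions` steps before choosing lcov flags) splits both version strings on `.`, `-`, and `:` and compares field by field numerically. The sketch below is a reconstruction under an assumed name, with missing fields treated as 0; purely numeric fields are assumed, since leading zeros would be read as octal by bash arithmetic.

```shell
# Sketch of the cmp_versions "less-than" logic (function name assumed).
version_lt() {
    local IFS=.-:
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    local v a b
    for ((v = 0; v < len; v++)); do
        a=${ver1[v]:-0} b=${ver2[v]:-0}
        if (( a < b )); then return 0; fi   # first smaller field decides
        if (( a > b )); then return 1; fi
    done
    return 1                                # equal versions are not less-than
}
```

Numeric field comparison is what makes `1.15` sort above `1.2` here, unlike a naive string compare; the trace relies on that to decide whether the installed lcov predates 2.x.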
01:19:38.638 [2024-12-09 05:14:30.116058] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58622 ] 01:19:38.895 [2024-12-09 05:14:30.296644] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:19:38.895 [2024-12-09 05:14:30.426856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:19:39.904 05:14:31 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:19:39.904 05:14:31 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 01:19:39.904 05:14:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58622 01:19:39.904 05:14:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 01:19:39.904 05:14:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58622 01:19:40.162 05:14:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58622 01:19:40.162 05:14:31 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 58622 ']' 01:19:40.162 05:14:31 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 58622 01:19:40.162 05:14:31 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 01:19:40.162 05:14:31 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:19:40.162 05:14:31 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58622 01:19:40.162 05:14:31 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:19:40.162 05:14:31 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:19:40.162 killing process with pid 58622 01:19:40.162 05:14:31 event.cpu_locks.default_locks -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 58622' 01:19:40.162 05:14:31 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 58622 01:19:40.162 05:14:31 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 58622 01:19:42.693 05:14:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58622 01:19:42.693 05:14:33 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 01:19:42.693 05:14:33 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58622 01:19:42.693 05:14:33 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 01:19:42.693 05:14:33 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:19:42.693 05:14:33 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 01:19:42.693 05:14:33 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:19:42.693 05:14:33 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 58622 01:19:42.693 05:14:33 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58622 ']' 01:19:42.693 05:14:33 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:19:42.693 05:14:33 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 01:19:42.693 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:19:42.693 05:14:33 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
01:19:42.693 05:14:33 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 01:19:42.693 05:14:33 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 01:19:42.693 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58622) - No such process 01:19:42.693 ERROR: process (pid: 58622) is no longer running 01:19:42.693 05:14:33 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:19:42.693 05:14:33 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 01:19:42.693 05:14:33 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 01:19:42.693 05:14:33 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:19:42.693 05:14:33 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:19:42.693 05:14:33 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:19:42.693 05:14:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 01:19:42.693 05:14:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 01:19:42.693 05:14:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 01:19:42.693 05:14:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 01:19:42.693 01:19:42.693 real 0m3.772s 01:19:42.693 user 0m3.686s 01:19:42.693 sys 0m0.823s 01:19:42.693 05:14:33 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 01:19:42.693 05:14:33 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 01:19:42.693 ************************************ 01:19:42.693 END TEST default_locks 01:19:42.693 ************************************ 01:19:42.693 05:14:33 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 01:19:42.693 05:14:33 event.cpu_locks -- common/autotest_common.sh@1105 -- # 
'[' 2 -le 1 ']' 01:19:42.693 05:14:33 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 01:19:42.693 05:14:33 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 01:19:42.693 ************************************ 01:19:42.693 START TEST default_locks_via_rpc 01:19:42.693 ************************************ 01:19:42.693 05:14:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 01:19:42.693 05:14:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58692 01:19:42.693 05:14:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58692 01:19:42.693 05:14:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 01:19:42.693 05:14:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58692 ']' 01:19:42.693 05:14:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:19:42.693 05:14:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 01:19:42.693 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:19:42.693 05:14:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:19:42.693 05:14:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 01:19:42.693 05:14:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 01:19:42.693 [2024-12-09 05:14:33.994022] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
01:19:42.693 [2024-12-09 05:14:33.994225] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58692 ] 01:19:42.693 [2024-12-09 05:14:34.172187] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:19:42.951 [2024-12-09 05:14:34.337900] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:19:43.886 05:14:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:19:43.886 05:14:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 01:19:43.886 05:14:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 01:19:43.886 05:14:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:19:43.886 05:14:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 01:19:43.886 05:14:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:19:43.886 05:14:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 01:19:43.886 05:14:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 01:19:43.886 05:14:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 01:19:43.886 05:14:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 01:19:43.886 05:14:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 01:19:43.886 05:14:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:19:43.886 05:14:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 01:19:43.886 05:14:35 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:19:43.886 05:14:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 58692 01:19:43.886 05:14:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 01:19:43.886 05:14:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58692 01:19:44.150 05:14:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58692 01:19:44.150 05:14:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 58692 ']' 01:19:44.150 05:14:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 58692 01:19:44.150 05:14:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 01:19:44.150 05:14:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:19:44.408 05:14:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58692 01:19:44.408 05:14:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:19:44.408 05:14:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:19:44.408 killing process with pid 58692 01:19:44.408 05:14:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58692' 01:19:44.408 05:14:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 58692 01:19:44.408 05:14:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 58692 01:19:46.942 01:19:46.942 real 0m4.250s 01:19:46.942 user 0m4.280s 01:19:46.942 sys 0m0.903s 01:19:46.942 05:14:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 01:19:46.942 05:14:38 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 01:19:46.942 ************************************ 01:19:46.942 END TEST default_locks_via_rpc 01:19:46.942 ************************************ 01:19:46.942 05:14:38 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 01:19:46.942 05:14:38 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:19:46.942 05:14:38 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 01:19:46.942 05:14:38 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 01:19:46.942 ************************************ 01:19:46.942 START TEST non_locking_app_on_locked_coremask 01:19:46.942 ************************************ 01:19:46.942 05:14:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 01:19:46.942 05:14:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=58771 01:19:46.942 05:14:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 58771 /var/tmp/spdk.sock 01:19:46.942 05:14:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58771 ']' 01:19:46.942 05:14:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:19:46.942 05:14:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 01:19:46.942 05:14:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 01:19:46.942 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
01:19:46.942 05:14:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:19:46.942 05:14:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 01:19:46.942 05:14:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 01:19:46.942 [2024-12-09 05:14:38.227583] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:19:46.942 [2024-12-09 05:14:38.228433] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58771 ] 01:19:46.942 [2024-12-09 05:14:38.399075] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:19:46.942 [2024-12-09 05:14:38.535298] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:19:47.879 05:14:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:19:47.879 05:14:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 01:19:47.879 05:14:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=58793 01:19:47.879 05:14:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 58793 /var/tmp/spdk2.sock 01:19:47.879 05:14:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 01:19:47.879 05:14:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58793 ']' 01:19:47.879 05:14:39 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 01:19:47.879 05:14:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 01:19:47.879 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 01:19:47.879 05:14:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 01:19:47.879 05:14:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 01:19:47.879 05:14:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 01:19:48.137 [2024-12-09 05:14:39.531222] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:19:48.137 [2024-12-09 05:14:39.531427] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58793 ] 01:19:48.137 [2024-12-09 05:14:39.729389] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
01:19:48.137 [2024-12-09 05:14:39.729453] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:19:48.395 [2024-12-09 05:14:39.985365] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:19:50.924 05:14:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:19:50.924 05:14:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 01:19:50.924 05:14:42 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 58771 01:19:50.924 05:14:42 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58771 01:19:50.924 05:14:42 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 01:19:51.491 05:14:43 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 58771 01:19:51.491 05:14:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58771 ']' 01:19:51.491 05:14:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58771 01:19:51.491 05:14:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 01:19:51.491 05:14:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:19:51.491 05:14:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58771 01:19:51.750 05:14:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:19:51.750 05:14:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:19:51.750 killing process with pid 58771 01:19:51.750 05:14:43 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 58771' 01:19:51.750 05:14:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58771 01:19:51.750 05:14:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58771 01:19:55.937 05:14:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 58793 01:19:55.937 05:14:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58793 ']' 01:19:55.937 05:14:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58793 01:19:55.937 05:14:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 01:19:55.937 05:14:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:19:55.937 05:14:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58793 01:19:55.937 05:14:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:19:55.937 05:14:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:19:55.937 05:14:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58793' 01:19:55.937 killing process with pid 58793 01:19:55.937 05:14:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58793 01:19:55.937 05:14:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58793 01:19:58.491 01:19:58.491 real 0m11.464s 01:19:58.491 user 0m11.825s 01:19:58.491 sys 0m1.678s 01:19:58.491 05:14:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # 
xtrace_disable 01:19:58.491 ************************************ 01:19:58.491 END TEST non_locking_app_on_locked_coremask 01:19:58.491 ************************************ 01:19:58.491 05:14:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 01:19:58.491 05:14:49 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 01:19:58.491 05:14:49 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:19:58.491 05:14:49 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 01:19:58.491 05:14:49 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 01:19:58.491 ************************************ 01:19:58.491 START TEST locking_app_on_unlocked_coremask 01:19:58.491 ************************************ 01:19:58.491 05:14:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 01:19:58.491 05:14:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=58941 01:19:58.491 05:14:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 58941 /var/tmp/spdk.sock 01:19:58.491 05:14:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58941 ']' 01:19:58.491 05:14:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 01:19:58.491 05:14:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:19:58.491 05:14:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 01:19:58.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
01:19:58.491 05:14:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:19:58.491 05:14:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 01:19:58.491 05:14:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 01:19:58.491 [2024-12-09 05:14:49.770649] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:19:58.491 [2024-12-09 05:14:49.770846] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58941 ] 01:19:58.491 [2024-12-09 05:14:49.955660] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 01:19:58.491 [2024-12-09 05:14:49.955727] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:19:58.491 [2024-12-09 05:14:50.096443] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:19:59.424 05:14:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:19:59.424 05:14:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 01:19:59.424 05:14:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=58959 01:19:59.424 05:14:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 01:19:59.424 05:14:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 58959 /var/tmp/spdk2.sock 01:19:59.424 05:14:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58959 ']' 
01:19:59.424 05:14:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 01:19:59.424 05:14:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 01:19:59.424 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 01:19:59.424 05:14:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 01:19:59.424 05:14:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 01:19:59.424 05:14:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 01:19:59.682 [2024-12-09 05:14:51.139011] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:19:59.682 [2024-12-09 05:14:51.139274] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58959 ] 01:19:59.941 [2024-12-09 05:14:51.328927] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:20:00.200 [2024-12-09 05:14:51.597221] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:20:02.731 05:14:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:20:02.731 05:14:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 01:20:02.731 05:14:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 58959 01:20:02.731 05:14:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58959 01:20:02.731 05:14:53 
event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 01:20:03.297 05:14:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 58941 01:20:03.297 05:14:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58941 ']' 01:20:03.297 05:14:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 58941 01:20:03.297 05:14:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 01:20:03.297 05:14:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:20:03.297 05:14:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58941 01:20:03.297 05:14:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:20:03.297 05:14:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:20:03.297 killing process with pid 58941 01:20:03.297 05:14:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58941' 01:20:03.297 05:14:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 58941 01:20:03.297 05:14:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 58941 01:20:07.483 05:14:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 58959 01:20:07.483 05:14:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58959 ']' 01:20:07.483 05:14:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 58959 01:20:07.483 05:14:59 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@959 -- # uname 01:20:07.483 05:14:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:20:07.483 05:14:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58959 01:20:07.741 05:14:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:20:07.741 killing process with pid 58959 01:20:07.741 05:14:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:20:07.741 05:14:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58959' 01:20:07.741 05:14:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 58959 01:20:07.741 05:14:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 58959 01:20:10.271 01:20:10.272 real 0m11.732s 01:20:10.272 user 0m12.103s 01:20:10.272 sys 0m1.756s 01:20:10.272 05:15:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 01:20:10.272 05:15:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 01:20:10.272 ************************************ 01:20:10.272 END TEST locking_app_on_unlocked_coremask 01:20:10.272 ************************************ 01:20:10.272 05:15:01 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 01:20:10.272 05:15:01 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:20:10.272 05:15:01 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 01:20:10.272 05:15:01 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 01:20:10.272 ************************************ 01:20:10.272 START TEST 
locking_app_on_locked_coremask 01:20:10.272 ************************************ 01:20:10.272 05:15:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 01:20:10.272 05:15:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59110 01:20:10.272 05:15:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59110 /var/tmp/spdk.sock 01:20:10.272 05:15:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 01:20:10.272 05:15:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59110 ']' 01:20:10.272 05:15:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:20:10.272 05:15:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 01:20:10.272 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:20:10.272 05:15:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:20:10.272 05:15:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 01:20:10.272 05:15:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 01:20:10.272 [2024-12-09 05:15:01.519521] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
01:20:10.272 [2024-12-09 05:15:01.519677] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59110 ] 01:20:10.272 [2024-12-09 05:15:01.689943] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:20:10.272 [2024-12-09 05:15:01.826821] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:20:11.253 05:15:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:20:11.253 05:15:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 01:20:11.253 05:15:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59126 01:20:11.253 05:15:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59126 /var/tmp/spdk2.sock 01:20:11.253 05:15:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 01:20:11.253 05:15:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59126 /var/tmp/spdk2.sock 01:20:11.253 05:15:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 01:20:11.253 05:15:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:20:11.253 05:15:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 01:20:11.253 05:15:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 01:20:11.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
01:20:11.253 05:15:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:20:11.253 05:15:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59126 /var/tmp/spdk2.sock 01:20:11.253 05:15:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59126 ']' 01:20:11.253 05:15:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 01:20:11.253 05:15:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 01:20:11.253 05:15:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 01:20:11.254 05:15:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 01:20:11.254 05:15:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 01:20:11.512 [2024-12-09 05:15:02.955384] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:20:11.512 [2024-12-09 05:15:02.956588] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59126 ] 01:20:11.771 [2024-12-09 05:15:03.173479] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59110 has claimed it. 01:20:11.771 [2024-12-09 05:15:03.173599] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
01:20:12.029 ERROR: process (pid: 59126) is no longer running 01:20:12.029 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59126) - No such process 01:20:12.029 05:15:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:20:12.029 05:15:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 01:20:12.029 05:15:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 01:20:12.029 05:15:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:20:12.029 05:15:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:20:12.029 05:15:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:20:12.029 05:15:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59110 01:20:12.029 05:15:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59110 01:20:12.030 05:15:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 01:20:12.597 05:15:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59110 01:20:12.597 05:15:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59110 ']' 01:20:12.597 05:15:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59110 01:20:12.597 05:15:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 01:20:12.597 05:15:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:20:12.597 05:15:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59110 01:20:12.597 
05:15:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:20:12.597 05:15:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:20:12.597 killing process with pid 59110 01:20:12.597 05:15:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59110' 01:20:12.597 05:15:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59110 01:20:12.597 05:15:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59110 01:20:15.127 01:20:15.127 real 0m4.758s 01:20:15.127 user 0m4.920s 01:20:15.127 sys 0m1.099s 01:20:15.127 05:15:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 01:20:15.127 05:15:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 01:20:15.127 ************************************ 01:20:15.127 END TEST locking_app_on_locked_coremask 01:20:15.127 ************************************ 01:20:15.127 05:15:06 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 01:20:15.127 05:15:06 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:20:15.127 05:15:06 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 01:20:15.127 05:15:06 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 01:20:15.127 ************************************ 01:20:15.127 START TEST locking_overlapped_coremask 01:20:15.127 ************************************ 01:20:15.128 05:15:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 01:20:15.128 05:15:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59196 01:20:15.128 05:15:06 
event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59196 /var/tmp/spdk.sock 01:20:15.128 05:15:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59196 ']' 01:20:15.128 05:15:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 01:20:15.128 05:15:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:20:15.128 05:15:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 01:20:15.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:20:15.128 05:15:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:20:15.128 05:15:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 01:20:15.128 05:15:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 01:20:15.128 [2024-12-09 05:15:06.330923] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
01:20:15.128 [2024-12-09 05:15:06.331078] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59196 ] 01:20:15.128 [2024-12-09 05:15:06.498517] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 01:20:15.128 [2024-12-09 05:15:06.624510] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:20:15.128 [2024-12-09 05:15:06.624635] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:20:15.128 [2024-12-09 05:15:06.624650] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:20:16.063 05:15:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:20:16.063 05:15:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 01:20:16.063 05:15:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59219 01:20:16.063 05:15:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 01:20:16.063 05:15:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59219 /var/tmp/spdk2.sock 01:20:16.063 05:15:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 01:20:16.063 05:15:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59219 /var/tmp/spdk2.sock 01:20:16.063 05:15:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 01:20:16.063 05:15:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:20:16.063 05:15:07 event.cpu_locks.locking_overlapped_coremask 
-- common/autotest_common.sh@644 -- # type -t waitforlisten 01:20:16.063 05:15:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:20:16.063 05:15:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59219 /var/tmp/spdk2.sock 01:20:16.063 05:15:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59219 ']' 01:20:16.063 05:15:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 01:20:16.063 05:15:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 01:20:16.063 05:15:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 01:20:16.063 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 01:20:16.063 05:15:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 01:20:16.063 05:15:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 01:20:16.063 [2024-12-09 05:15:07.605002] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:20:16.063 [2024-12-09 05:15:07.605888] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59219 ] 01:20:16.321 [2024-12-09 05:15:07.809262] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59196 has claimed it. 01:20:16.321 [2024-12-09 05:15:07.809333] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
01:20:16.888 ERROR: process (pid: 59219) is no longer running 01:20:16.888 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59219) - No such process 01:20:16.888 05:15:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:20:16.888 05:15:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 01:20:16.888 05:15:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 01:20:16.888 05:15:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:20:16.888 05:15:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:20:16.888 05:15:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:20:16.888 05:15:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 01:20:16.888 05:15:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 01:20:16.888 05:15:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 01:20:16.888 05:15:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 01:20:16.888 05:15:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59196 01:20:16.888 05:15:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 59196 ']' 01:20:16.888 05:15:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 59196 01:20:16.888 05:15:08 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 01:20:16.888 05:15:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:20:16.888 05:15:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59196 01:20:16.888 05:15:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:20:16.888 05:15:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:20:16.888 killing process with pid 59196 01:20:16.888 05:15:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59196' 01:20:16.888 05:15:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 59196 01:20:16.888 05:15:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 59196 01:20:19.418 01:20:19.418 real 0m4.290s 01:20:19.418 user 0m11.485s 01:20:19.418 sys 0m0.822s 01:20:19.418 05:15:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 01:20:19.418 05:15:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 01:20:19.418 ************************************ 01:20:19.418 END TEST locking_overlapped_coremask 01:20:19.418 ************************************ 01:20:19.418 05:15:10 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 01:20:19.418 05:15:10 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:20:19.418 05:15:10 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 01:20:19.418 05:15:10 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 01:20:19.418 ************************************ 01:20:19.418 START TEST 
locking_overlapped_coremask_via_rpc 01:20:19.418 ************************************ 01:20:19.418 05:15:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 01:20:19.418 05:15:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59278 01:20:19.418 05:15:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59278 /var/tmp/spdk.sock 01:20:19.418 05:15:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59278 ']' 01:20:19.418 05:15:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 01:20:19.418 05:15:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:20:19.418 05:15:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 01:20:19.418 05:15:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:20:19.418 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:20:19.418 05:15:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 01:20:19.418 05:15:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 01:20:19.418 [2024-12-09 05:15:10.705583] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
01:20:19.418 [2024-12-09 05:15:10.706473] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59278 ] 01:20:19.418 [2024-12-09 05:15:10.885168] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 01:20:19.418 [2024-12-09 05:15:10.885224] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 01:20:19.418 [2024-12-09 05:15:11.003460] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:20:19.418 [2024-12-09 05:15:11.003622] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:20:19.418 [2024-12-09 05:15:11.003645] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:20:20.353 05:15:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:20:20.353 05:15:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 01:20:20.353 05:15:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59301 01:20:20.353 05:15:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59301 /var/tmp/spdk2.sock 01:20:20.353 05:15:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59301 ']' 01:20:20.353 05:15:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 01:20:20.353 05:15:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 01:20:20.353 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
01:20:20.353 05:15:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 01:20:20.353 05:15:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 01:20:20.353 05:15:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 01:20:20.353 05:15:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 01:20:20.612 [2024-12-09 05:15:12.057259] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:20:20.612 [2024-12-09 05:15:12.057468] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59301 ] 01:20:20.870 [2024-12-09 05:15:12.259722] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
01:20:20.870 [2024-12-09 05:15:12.259828] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 01:20:21.128 [2024-12-09 05:15:12.531706] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 01:20:21.128 [2024-12-09 05:15:12.535455] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:20:21.128 [2024-12-09 05:15:12.535461] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 01:20:23.659 05:15:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:20:23.659 05:15:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 01:20:23.659 05:15:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 01:20:23.659 05:15:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:23.659 05:15:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 01:20:23.659 05:15:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:23.659 05:15:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 01:20:23.659 05:15:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 01:20:23.659 05:15:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 01:20:23.659 05:15:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 01:20:23.659 05:15:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:20:23.659 05:15:14 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 01:20:23.659 05:15:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:20:23.659 05:15:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 01:20:23.659 05:15:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:23.659 05:15:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 01:20:23.659 [2024-12-09 05:15:14.796630] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59278 has claimed it. 01:20:23.659 request: 01:20:23.659 { 01:20:23.659 "method": "framework_enable_cpumask_locks", 01:20:23.659 "req_id": 1 01:20:23.659 } 01:20:23.659 Got JSON-RPC error response 01:20:23.659 response: 01:20:23.659 { 01:20:23.659 "code": -32603, 01:20:23.659 "message": "Failed to claim CPU core: 2" 01:20:23.659 } 01:20:23.659 05:15:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 01:20:23.659 05:15:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 01:20:23.659 05:15:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:20:23.659 05:15:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:20:23.659 05:15:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:20:23.659 05:15:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59278 /var/tmp/spdk.sock 01:20:23.659 05:15:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # 
'[' -z 59278 ']' 01:20:23.659 05:15:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:20:23.659 05:15:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 01:20:23.659 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:20:23.659 05:15:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:20:23.659 05:15:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 01:20:23.659 05:15:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 01:20:23.659 05:15:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:20:23.659 05:15:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 01:20:23.659 05:15:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59301 /var/tmp/spdk2.sock 01:20:23.659 05:15:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59301 ']' 01:20:23.659 05:15:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 01:20:23.659 05:15:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 01:20:23.659 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 01:20:23.659 05:15:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
01:20:23.660 05:15:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 01:20:23.660 05:15:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 01:20:23.918 05:15:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:20:23.918 05:15:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 01:20:23.918 05:15:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 01:20:23.918 05:15:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 01:20:23.918 05:15:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 01:20:23.918 05:15:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 01:20:23.918 01:20:23.918 real 0m4.799s 01:20:23.918 user 0m1.667s 01:20:23.918 sys 0m0.232s 01:20:23.918 05:15:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 01:20:23.918 05:15:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 01:20:23.918 ************************************ 01:20:23.918 END TEST locking_overlapped_coremask_via_rpc 01:20:23.918 ************************************ 01:20:23.918 05:15:15 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 01:20:23.918 05:15:15 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59278 ]] 01:20:23.918 05:15:15 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 59278 01:20:23.918 05:15:15 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59278 ']' 01:20:23.918 05:15:15 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59278 01:20:23.918 05:15:15 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 01:20:23.918 05:15:15 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:20:23.918 05:15:15 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59278 01:20:23.918 05:15:15 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:20:23.918 05:15:15 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:20:23.918 killing process with pid 59278 01:20:23.918 05:15:15 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59278' 01:20:23.918 05:15:15 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59278 01:20:23.918 05:15:15 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59278 01:20:26.493 05:15:17 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59301 ]] 01:20:26.493 05:15:17 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59301 01:20:26.493 05:15:17 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59301 ']' 01:20:26.493 05:15:17 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59301 01:20:26.493 05:15:17 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 01:20:26.493 05:15:17 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:20:26.493 05:15:17 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59301 01:20:26.493 05:15:17 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 01:20:26.493 killing process with pid 59301 01:20:26.493 05:15:17 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 01:20:26.493 05:15:17 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing 
process with pid 59301' 01:20:26.493 05:15:17 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59301 01:20:26.493 05:15:17 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59301 01:20:29.014 05:15:20 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 01:20:29.014 05:15:20 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 01:20:29.014 05:15:20 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59278 ]] 01:20:29.014 05:15:20 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59278 01:20:29.014 05:15:20 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59278 ']' 01:20:29.014 05:15:20 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59278 01:20:29.014 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59278) - No such process 01:20:29.014 Process with pid 59278 is not found 01:20:29.014 05:15:20 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59278 is not found' 01:20:29.014 05:15:20 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59301 ]] 01:20:29.014 05:15:20 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59301 01:20:29.014 05:15:20 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59301 ']' 01:20:29.014 Process with pid 59301 is not found 01:20:29.014 05:15:20 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59301 01:20:29.014 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59301) - No such process 01:20:29.014 05:15:20 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59301 is not found' 01:20:29.014 05:15:20 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 01:20:29.014 ************************************ 01:20:29.014 END TEST cpu_locks 01:20:29.014 ************************************ 01:20:29.014 01:20:29.014 real 0m50.444s 01:20:29.014 user 1m26.720s 01:20:29.014 sys 0m8.774s 01:20:29.014 05:15:20 event.cpu_locks -- common/autotest_common.sh@1130 -- # 
xtrace_disable 01:20:29.014 05:15:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 01:20:29.014 01:20:29.014 real 1m20.909s 01:20:29.014 user 2m26.694s 01:20:29.014 sys 0m13.209s 01:20:29.014 05:15:20 event -- common/autotest_common.sh@1130 -- # xtrace_disable 01:20:29.014 05:15:20 event -- common/autotest_common.sh@10 -- # set +x 01:20:29.014 ************************************ 01:20:29.014 END TEST event 01:20:29.014 ************************************ 01:20:29.014 05:15:20 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 01:20:29.014 05:15:20 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:20:29.014 05:15:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 01:20:29.014 05:15:20 -- common/autotest_common.sh@10 -- # set +x 01:20:29.014 ************************************ 01:20:29.014 START TEST thread 01:20:29.014 ************************************ 01:20:29.014 05:15:20 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 01:20:29.014 * Looking for test storage... 
01:20:29.014 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 01:20:29.014 05:15:20 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:20:29.014 05:15:20 thread -- common/autotest_common.sh@1693 -- # lcov --version 01:20:29.014 05:15:20 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:20:29.014 05:15:20 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:20:29.014 05:15:20 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:20:29.014 05:15:20 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 01:20:29.015 05:15:20 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 01:20:29.015 05:15:20 thread -- scripts/common.sh@336 -- # IFS=.-: 01:20:29.015 05:15:20 thread -- scripts/common.sh@336 -- # read -ra ver1 01:20:29.015 05:15:20 thread -- scripts/common.sh@337 -- # IFS=.-: 01:20:29.015 05:15:20 thread -- scripts/common.sh@337 -- # read -ra ver2 01:20:29.015 05:15:20 thread -- scripts/common.sh@338 -- # local 'op=<' 01:20:29.015 05:15:20 thread -- scripts/common.sh@340 -- # ver1_l=2 01:20:29.015 05:15:20 thread -- scripts/common.sh@341 -- # ver2_l=1 01:20:29.015 05:15:20 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:20:29.015 05:15:20 thread -- scripts/common.sh@344 -- # case "$op" in 01:20:29.015 05:15:20 thread -- scripts/common.sh@345 -- # : 1 01:20:29.015 05:15:20 thread -- scripts/common.sh@364 -- # (( v = 0 )) 01:20:29.015 05:15:20 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:20:29.015 05:15:20 thread -- scripts/common.sh@365 -- # decimal 1 01:20:29.015 05:15:20 thread -- scripts/common.sh@353 -- # local d=1 01:20:29.015 05:15:20 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:20:29.015 05:15:20 thread -- scripts/common.sh@355 -- # echo 1 01:20:29.015 05:15:20 thread -- scripts/common.sh@365 -- # ver1[v]=1 01:20:29.015 05:15:20 thread -- scripts/common.sh@366 -- # decimal 2 01:20:29.015 05:15:20 thread -- scripts/common.sh@353 -- # local d=2 01:20:29.015 05:15:20 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:20:29.015 05:15:20 thread -- scripts/common.sh@355 -- # echo 2 01:20:29.015 05:15:20 thread -- scripts/common.sh@366 -- # ver2[v]=2 01:20:29.015 05:15:20 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:20:29.015 05:15:20 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:20:29.015 05:15:20 thread -- scripts/common.sh@368 -- # return 0 01:20:29.015 05:15:20 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:20:29.015 05:15:20 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:20:29.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:20:29.015 --rc genhtml_branch_coverage=1 01:20:29.015 --rc genhtml_function_coverage=1 01:20:29.015 --rc genhtml_legend=1 01:20:29.015 --rc geninfo_all_blocks=1 01:20:29.015 --rc geninfo_unexecuted_blocks=1 01:20:29.015 01:20:29.015 ' 01:20:29.015 05:15:20 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:20:29.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:20:29.015 --rc genhtml_branch_coverage=1 01:20:29.015 --rc genhtml_function_coverage=1 01:20:29.015 --rc genhtml_legend=1 01:20:29.015 --rc geninfo_all_blocks=1 01:20:29.015 --rc geninfo_unexecuted_blocks=1 01:20:29.015 01:20:29.015 ' 01:20:29.015 05:15:20 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:20:29.015 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:20:29.015 --rc genhtml_branch_coverage=1 01:20:29.015 --rc genhtml_function_coverage=1 01:20:29.015 --rc genhtml_legend=1 01:20:29.015 --rc geninfo_all_blocks=1 01:20:29.015 --rc geninfo_unexecuted_blocks=1 01:20:29.015 01:20:29.015 ' 01:20:29.015 05:15:20 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:20:29.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:20:29.015 --rc genhtml_branch_coverage=1 01:20:29.015 --rc genhtml_function_coverage=1 01:20:29.015 --rc genhtml_legend=1 01:20:29.015 --rc geninfo_all_blocks=1 01:20:29.015 --rc geninfo_unexecuted_blocks=1 01:20:29.015 01:20:29.015 ' 01:20:29.015 05:15:20 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 01:20:29.015 05:15:20 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 01:20:29.015 05:15:20 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 01:20:29.015 05:15:20 thread -- common/autotest_common.sh@10 -- # set +x 01:20:29.015 ************************************ 01:20:29.015 START TEST thread_poller_perf 01:20:29.015 ************************************ 01:20:29.015 05:15:20 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 01:20:29.015 [2024-12-09 05:15:20.577932] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
01:20:29.015 [2024-12-09 05:15:20.578132] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59502 ] 01:20:29.273 [2024-12-09 05:15:20.765757] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:20:29.531 Running 1000 pollers for 1 seconds with 1 microseconds period. 01:20:29.531 [2024-12-09 05:15:20.936497] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:20:30.906 [2024-12-09T05:15:22.523Z] ====================================== 01:20:30.906 [2024-12-09T05:15:22.523Z] busy:2213563904 (cyc) 01:20:30.906 [2024-12-09T05:15:22.523Z] total_run_count: 350000 01:20:30.906 [2024-12-09T05:15:22.523Z] tsc_hz: 2200000000 (cyc) 01:20:30.906 [2024-12-09T05:15:22.523Z] ====================================== 01:20:30.906 [2024-12-09T05:15:22.523Z] poller_cost: 6324 (cyc), 2874 (nsec) 01:20:30.906 01:20:30.906 real 0m1.718s 01:20:30.906 user 0m1.507s 01:20:30.906 sys 0m0.100s 01:20:30.906 05:15:22 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 01:20:30.906 ************************************ 01:20:30.906 END TEST thread_poller_perf 01:20:30.906 05:15:22 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 01:20:30.906 ************************************ 01:20:30.906 05:15:22 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 01:20:30.906 05:15:22 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 01:20:30.906 05:15:22 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 01:20:30.906 05:15:22 thread -- common/autotest_common.sh@10 -- # set +x 01:20:30.906 ************************************ 01:20:30.906 START TEST thread_poller_perf 01:20:30.906 
************************************ 01:20:30.906 05:15:22 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 01:20:30.906 [2024-12-09 05:15:22.346978] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:20:30.906 [2024-12-09 05:15:22.347160] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59544 ] 01:20:31.164 [2024-12-09 05:15:22.531153] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:20:31.164 Running 1000 pollers for 1 seconds with 0 microseconds period. 01:20:31.164 [2024-12-09 05:15:22.656135] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:20:32.540 [2024-12-09T05:15:24.157Z] ====================================== 01:20:32.540 [2024-12-09T05:15:24.157Z] busy:2204217630 (cyc) 01:20:32.540 [2024-12-09T05:15:24.157Z] total_run_count: 4519000 01:20:32.540 [2024-12-09T05:15:24.157Z] tsc_hz: 2200000000 (cyc) 01:20:32.540 [2024-12-09T05:15:24.157Z] ====================================== 01:20:32.540 [2024-12-09T05:15:24.157Z] poller_cost: 487 (cyc), 221 (nsec) 01:20:32.540 01:20:32.540 real 0m1.647s 01:20:32.540 user 0m1.420s 01:20:32.540 sys 0m0.118s 01:20:32.540 05:15:23 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 01:20:32.540 05:15:23 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 01:20:32.540 ************************************ 01:20:32.540 END TEST thread_poller_perf 01:20:32.540 ************************************ 01:20:32.540 05:15:23 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 01:20:32.540 01:20:32.540 real 0m3.666s 01:20:32.540 user 0m3.087s 01:20:32.540 sys 0m0.356s 01:20:32.540 05:15:23 thread -- 
common/autotest_common.sh@1130 -- # xtrace_disable 01:20:32.540 05:15:23 thread -- common/autotest_common.sh@10 -- # set +x 01:20:32.540 ************************************ 01:20:32.540 END TEST thread 01:20:32.540 ************************************ 01:20:32.540 05:15:24 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 01:20:32.540 05:15:24 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 01:20:32.540 05:15:24 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:20:32.540 05:15:24 -- common/autotest_common.sh@1111 -- # xtrace_disable 01:20:32.540 05:15:24 -- common/autotest_common.sh@10 -- # set +x 01:20:32.540 ************************************ 01:20:32.540 START TEST app_cmdline 01:20:32.540 ************************************ 01:20:32.540 05:15:24 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 01:20:32.540 * Looking for test storage... 01:20:32.540 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 01:20:32.540 05:15:24 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:20:32.540 05:15:24 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 01:20:32.540 05:15:24 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:20:32.802 05:15:24 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:20:32.802 05:15:24 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:20:32.802 05:15:24 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 01:20:32.802 05:15:24 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 01:20:32.802 05:15:24 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 01:20:32.802 05:15:24 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 01:20:32.802 05:15:24 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 01:20:32.802 05:15:24 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 01:20:32.802 05:15:24 app_cmdline -- scripts/common.sh@338 -- # 
local 'op=<' 01:20:32.802 05:15:24 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 01:20:32.802 05:15:24 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 01:20:32.802 05:15:24 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:20:32.802 05:15:24 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 01:20:32.802 05:15:24 app_cmdline -- scripts/common.sh@345 -- # : 1 01:20:32.802 05:15:24 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 01:20:32.802 05:15:24 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 01:20:32.802 05:15:24 app_cmdline -- scripts/common.sh@365 -- # decimal 1 01:20:32.802 05:15:24 app_cmdline -- scripts/common.sh@353 -- # local d=1 01:20:32.802 05:15:24 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:20:32.802 05:15:24 app_cmdline -- scripts/common.sh@355 -- # echo 1 01:20:32.802 05:15:24 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 01:20:32.802 05:15:24 app_cmdline -- scripts/common.sh@366 -- # decimal 2 01:20:32.802 05:15:24 app_cmdline -- scripts/common.sh@353 -- # local d=2 01:20:32.802 05:15:24 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:20:32.802 05:15:24 app_cmdline -- scripts/common.sh@355 -- # echo 2 01:20:32.802 05:15:24 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 01:20:32.802 05:15:24 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:20:32.802 05:15:24 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:20:32.802 05:15:24 app_cmdline -- scripts/common.sh@368 -- # return 0 01:20:32.802 05:15:24 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:20:32.802 05:15:24 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:20:32.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:20:32.802 --rc genhtml_branch_coverage=1 01:20:32.802 --rc genhtml_function_coverage=1 01:20:32.802 --rc 
genhtml_legend=1 01:20:32.802 --rc geninfo_all_blocks=1 01:20:32.802 --rc geninfo_unexecuted_blocks=1 01:20:32.802 01:20:32.802 ' 01:20:32.802 05:15:24 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:20:32.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:20:32.802 --rc genhtml_branch_coverage=1 01:20:32.802 --rc genhtml_function_coverage=1 01:20:32.802 --rc genhtml_legend=1 01:20:32.802 --rc geninfo_all_blocks=1 01:20:32.802 --rc geninfo_unexecuted_blocks=1 01:20:32.802 01:20:32.802 ' 01:20:32.802 05:15:24 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:20:32.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:20:32.802 --rc genhtml_branch_coverage=1 01:20:32.802 --rc genhtml_function_coverage=1 01:20:32.802 --rc genhtml_legend=1 01:20:32.802 --rc geninfo_all_blocks=1 01:20:32.802 --rc geninfo_unexecuted_blocks=1 01:20:32.802 01:20:32.802 ' 01:20:32.802 05:15:24 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:20:32.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:20:32.802 --rc genhtml_branch_coverage=1 01:20:32.802 --rc genhtml_function_coverage=1 01:20:32.802 --rc genhtml_legend=1 01:20:32.802 --rc geninfo_all_blocks=1 01:20:32.802 --rc geninfo_unexecuted_blocks=1 01:20:32.802 01:20:32.802 ' 01:20:32.802 05:15:24 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 01:20:32.802 05:15:24 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59626 01:20:32.802 05:15:24 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59626 01:20:32.802 05:15:24 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 01:20:32.802 05:15:24 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 59626 ']' 01:20:32.802 05:15:24 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:20:32.802 05:15:24 app_cmdline -- common/autotest_common.sh@840 -- # 
local max_retries=100 01:20:32.802 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:20:32.802 05:15:24 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:20:32.802 05:15:24 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 01:20:32.802 05:15:24 app_cmdline -- common/autotest_common.sh@10 -- # set +x 01:20:32.802 [2024-12-09 05:15:24.370928] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:20:32.802 [2024-12-09 05:15:24.371145] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59626 ] 01:20:33.061 [2024-12-09 05:15:24.556000] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:20:33.320 [2024-12-09 05:15:24.684310] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:20:34.256 05:15:25 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:20:34.256 05:15:25 app_cmdline -- common/autotest_common.sh@868 -- # return 0 01:20:34.256 05:15:25 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 01:20:34.256 { 01:20:34.256 "version": "SPDK v25.01-pre git sha1 66902d69a", 01:20:34.256 "fields": { 01:20:34.256 "major": 25, 01:20:34.256 "minor": 1, 01:20:34.256 "patch": 0, 01:20:34.256 "suffix": "-pre", 01:20:34.256 "commit": "66902d69a" 01:20:34.256 } 01:20:34.256 } 01:20:34.514 05:15:25 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 01:20:34.514 05:15:25 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 01:20:34.514 05:15:25 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 01:20:34.514 05:15:25 app_cmdline -- app/cmdline.sh@26 -- # 
methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 01:20:34.514 05:15:25 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 01:20:34.514 05:15:25 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:34.514 05:15:25 app_cmdline -- common/autotest_common.sh@10 -- # set +x 01:20:34.514 05:15:25 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 01:20:34.514 05:15:25 app_cmdline -- app/cmdline.sh@26 -- # sort 01:20:34.514 05:15:25 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:34.514 05:15:25 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 01:20:34.514 05:15:25 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 01:20:34.514 05:15:25 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 01:20:34.514 05:15:25 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 01:20:34.514 05:15:25 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 01:20:34.514 05:15:25 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:20:34.514 05:15:25 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:20:34.514 05:15:25 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:20:34.514 05:15:25 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:20:34.514 05:15:25 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:20:34.514 05:15:25 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:20:34.514 05:15:25 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:20:34.514 05:15:25 app_cmdline -- common/autotest_common.sh@646 
-- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 01:20:34.514 05:15:25 app_cmdline -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 01:20:34.773 request: 01:20:34.773 { 01:20:34.773 "method": "env_dpdk_get_mem_stats", 01:20:34.773 "req_id": 1 01:20:34.773 } 01:20:34.773 Got JSON-RPC error response 01:20:34.773 response: 01:20:34.773 { 01:20:34.773 "code": -32601, 01:20:34.773 "message": "Method not found" 01:20:34.773 } 01:20:34.773 05:15:26 app_cmdline -- common/autotest_common.sh@655 -- # es=1 01:20:34.773 05:15:26 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:20:34.773 05:15:26 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:20:34.773 05:15:26 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:20:34.773 05:15:26 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59626 01:20:34.773 05:15:26 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 59626 ']' 01:20:34.773 05:15:26 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 59626 01:20:34.773 05:15:26 app_cmdline -- common/autotest_common.sh@959 -- # uname 01:20:34.773 05:15:26 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:20:34.773 05:15:26 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59626 01:20:34.773 05:15:26 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:20:34.773 05:15:26 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:20:34.773 killing process with pid 59626 01:20:34.773 05:15:26 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59626' 01:20:34.773 05:15:26 app_cmdline -- common/autotest_common.sh@973 -- # kill 59626 01:20:34.773 05:15:26 app_cmdline -- common/autotest_common.sh@978 -- # wait 59626 01:20:37.301 01:20:37.301 real 0m4.415s 01:20:37.301 user 0m4.759s 01:20:37.301 sys 0m0.765s 01:20:37.301 05:15:28 app_cmdline -- 
common/autotest_common.sh@1130 -- # xtrace_disable 01:20:37.301 ************************************ 01:20:37.301 END TEST app_cmdline 01:20:37.301 ************************************ 01:20:37.301 05:15:28 app_cmdline -- common/autotest_common.sh@10 -- # set +x 01:20:37.301 05:15:28 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 01:20:37.301 05:15:28 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:20:37.301 05:15:28 -- common/autotest_common.sh@1111 -- # xtrace_disable 01:20:37.301 05:15:28 -- common/autotest_common.sh@10 -- # set +x 01:20:37.301 ************************************ 01:20:37.301 START TEST version 01:20:37.301 ************************************ 01:20:37.301 05:15:28 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 01:20:37.301 * Looking for test storage... 01:20:37.301 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 01:20:37.301 05:15:28 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:20:37.301 05:15:28 version -- common/autotest_common.sh@1693 -- # lcov --version 01:20:37.301 05:15:28 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:20:37.301 05:15:28 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:20:37.301 05:15:28 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:20:37.301 05:15:28 version -- scripts/common.sh@333 -- # local ver1 ver1_l 01:20:37.301 05:15:28 version -- scripts/common.sh@334 -- # local ver2 ver2_l 01:20:37.301 05:15:28 version -- scripts/common.sh@336 -- # IFS=.-: 01:20:37.301 05:15:28 version -- scripts/common.sh@336 -- # read -ra ver1 01:20:37.301 05:15:28 version -- scripts/common.sh@337 -- # IFS=.-: 01:20:37.301 05:15:28 version -- scripts/common.sh@337 -- # read -ra ver2 01:20:37.301 05:15:28 version -- scripts/common.sh@338 -- # local 'op=<' 01:20:37.301 05:15:28 version -- scripts/common.sh@340 -- # ver1_l=2 01:20:37.301 05:15:28 version -- 
scripts/common.sh@341 -- # ver2_l=1 01:20:37.301 05:15:28 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:20:37.301 05:15:28 version -- scripts/common.sh@344 -- # case "$op" in 01:20:37.301 05:15:28 version -- scripts/common.sh@345 -- # : 1 01:20:37.301 05:15:28 version -- scripts/common.sh@364 -- # (( v = 0 )) 01:20:37.301 05:15:28 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 01:20:37.301 05:15:28 version -- scripts/common.sh@365 -- # decimal 1 01:20:37.301 05:15:28 version -- scripts/common.sh@353 -- # local d=1 01:20:37.301 05:15:28 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:20:37.301 05:15:28 version -- scripts/common.sh@355 -- # echo 1 01:20:37.301 05:15:28 version -- scripts/common.sh@365 -- # ver1[v]=1 01:20:37.301 05:15:28 version -- scripts/common.sh@366 -- # decimal 2 01:20:37.301 05:15:28 version -- scripts/common.sh@353 -- # local d=2 01:20:37.301 05:15:28 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:20:37.301 05:15:28 version -- scripts/common.sh@355 -- # echo 2 01:20:37.301 05:15:28 version -- scripts/common.sh@366 -- # ver2[v]=2 01:20:37.301 05:15:28 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:20:37.301 05:15:28 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:20:37.301 05:15:28 version -- scripts/common.sh@368 -- # return 0 01:20:37.301 05:15:28 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:20:37.301 05:15:28 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:20:37.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:20:37.301 --rc genhtml_branch_coverage=1 01:20:37.301 --rc genhtml_function_coverage=1 01:20:37.301 --rc genhtml_legend=1 01:20:37.301 --rc geninfo_all_blocks=1 01:20:37.301 --rc geninfo_unexecuted_blocks=1 01:20:37.301 01:20:37.301 ' 01:20:37.301 05:15:28 version -- common/autotest_common.sh@1706 -- # 
LCOV_OPTS=' 01:20:37.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:20:37.301 --rc genhtml_branch_coverage=1 01:20:37.301 --rc genhtml_function_coverage=1 01:20:37.301 --rc genhtml_legend=1 01:20:37.301 --rc geninfo_all_blocks=1 01:20:37.301 --rc geninfo_unexecuted_blocks=1 01:20:37.301 01:20:37.301 ' 01:20:37.301 05:15:28 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:20:37.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:20:37.301 --rc genhtml_branch_coverage=1 01:20:37.301 --rc genhtml_function_coverage=1 01:20:37.301 --rc genhtml_legend=1 01:20:37.301 --rc geninfo_all_blocks=1 01:20:37.301 --rc geninfo_unexecuted_blocks=1 01:20:37.301 01:20:37.301 ' 01:20:37.301 05:15:28 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:20:37.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:20:37.301 --rc genhtml_branch_coverage=1 01:20:37.301 --rc genhtml_function_coverage=1 01:20:37.301 --rc genhtml_legend=1 01:20:37.301 --rc geninfo_all_blocks=1 01:20:37.301 --rc geninfo_unexecuted_blocks=1 01:20:37.301 01:20:37.301 ' 01:20:37.301 05:15:28 version -- app/version.sh@17 -- # get_header_version major 01:20:37.302 05:15:28 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 01:20:37.302 05:15:28 version -- app/version.sh@14 -- # cut -f2 01:20:37.302 05:15:28 version -- app/version.sh@14 -- # tr -d '"' 01:20:37.302 05:15:28 version -- app/version.sh@17 -- # major=25 01:20:37.302 05:15:28 version -- app/version.sh@18 -- # get_header_version minor 01:20:37.302 05:15:28 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 01:20:37.302 05:15:28 version -- app/version.sh@14 -- # cut -f2 01:20:37.302 05:15:28 version -- app/version.sh@14 -- # tr -d '"' 01:20:37.302 05:15:28 version -- app/version.sh@18 -- # minor=1 01:20:37.302 05:15:28 
version -- app/version.sh@19 -- # get_header_version patch 01:20:37.302 05:15:28 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 01:20:37.302 05:15:28 version -- app/version.sh@14 -- # cut -f2 01:20:37.302 05:15:28 version -- app/version.sh@14 -- # tr -d '"' 01:20:37.302 05:15:28 version -- app/version.sh@19 -- # patch=0 01:20:37.302 05:15:28 version -- app/version.sh@20 -- # get_header_version suffix 01:20:37.302 05:15:28 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 01:20:37.302 05:15:28 version -- app/version.sh@14 -- # cut -f2 01:20:37.302 05:15:28 version -- app/version.sh@14 -- # tr -d '"' 01:20:37.302 05:15:28 version -- app/version.sh@20 -- # suffix=-pre 01:20:37.302 05:15:28 version -- app/version.sh@22 -- # version=25.1 01:20:37.302 05:15:28 version -- app/version.sh@25 -- # (( patch != 0 )) 01:20:37.302 05:15:28 version -- app/version.sh@28 -- # version=25.1rc0 01:20:37.302 05:15:28 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 01:20:37.302 05:15:28 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 01:20:37.302 05:15:28 version -- app/version.sh@30 -- # py_version=25.1rc0 01:20:37.302 05:15:28 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 01:20:37.302 01:20:37.302 real 0m0.250s 01:20:37.302 user 0m0.156s 01:20:37.302 sys 0m0.133s 01:20:37.302 05:15:28 version -- common/autotest_common.sh@1130 -- # xtrace_disable 01:20:37.302 05:15:28 version -- common/autotest_common.sh@10 -- # set +x 01:20:37.302 ************************************ 01:20:37.302 END TEST version 01:20:37.302 ************************************ 01:20:37.302 
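The version test above greps SPDK_VERSION_MAJOR/MINOR/PATCH/SUFFIX out of include/spdk/version.h and assembles the string it compares against `python3 -c 'import spdk; print(spdk.__version__)'`. A condensed sketch of that assembly, using the values visible in this run's xtrace (25, 1, 0, -pre); the mapping of a non-empty suffix to `rc0` is inferred from the `version=25.1rc0` step in the log:

```shell
# Values parsed from version.h in this run (assumed fixed here for illustration).
major=25 minor=1 patch=0 suffix=-pre

version="$major.$minor"
if (( patch != 0 )); then
    version="$version.$patch"       # patch is 0 in this run, so this is skipped
fi
if [[ -n "$suffix" ]]; then
    version="${version}rc0"         # pre-release suffix maps to an rc0 tag
fi

echo "$version"
# Prints: 25.1rc0 (the string the test matches against the Python package version)
```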
05:15:28 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 01:20:37.302 05:15:28 -- spdk/autotest.sh@188 -- # [[ 1 -eq 1 ]] 01:20:37.302 05:15:28 -- spdk/autotest.sh@189 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 01:20:37.302 05:15:28 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:20:37.302 05:15:28 -- common/autotest_common.sh@1111 -- # xtrace_disable 01:20:37.302 05:15:28 -- common/autotest_common.sh@10 -- # set +x 01:20:37.302 ************************************ 01:20:37.302 START TEST bdev_raid 01:20:37.302 ************************************ 01:20:37.302 05:15:28 bdev_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 01:20:37.302 * Looking for test storage... 01:20:37.302 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 01:20:37.302 05:15:28 bdev_raid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:20:37.302 05:15:28 bdev_raid -- common/autotest_common.sh@1693 -- # lcov --version 01:20:37.302 05:15:28 bdev_raid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:20:37.559 05:15:28 bdev_raid -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:20:37.559 05:15:28 bdev_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:20:37.559 05:15:28 bdev_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 01:20:37.559 05:15:28 bdev_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 01:20:37.559 05:15:28 bdev_raid -- scripts/common.sh@336 -- # IFS=.-: 01:20:37.559 05:15:28 bdev_raid -- scripts/common.sh@336 -- # read -ra ver1 01:20:37.559 05:15:28 bdev_raid -- scripts/common.sh@337 -- # IFS=.-: 01:20:37.559 05:15:28 bdev_raid -- scripts/common.sh@337 -- # read -ra ver2 01:20:37.559 05:15:28 bdev_raid -- scripts/common.sh@338 -- # local 'op=<' 01:20:37.559 05:15:28 bdev_raid -- scripts/common.sh@340 -- # ver1_l=2 01:20:37.560 05:15:28 bdev_raid -- scripts/common.sh@341 -- # ver2_l=1 01:20:37.560 05:15:28 bdev_raid -- scripts/common.sh@343 -- # local 
lt=0 gt=0 eq=0 v 01:20:37.560 05:15:28 bdev_raid -- scripts/common.sh@344 -- # case "$op" in 01:20:37.560 05:15:28 bdev_raid -- scripts/common.sh@345 -- # : 1 01:20:37.560 05:15:28 bdev_raid -- scripts/common.sh@364 -- # (( v = 0 )) 01:20:37.560 05:15:28 bdev_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 01:20:37.560 05:15:28 bdev_raid -- scripts/common.sh@365 -- # decimal 1 01:20:37.560 05:15:28 bdev_raid -- scripts/common.sh@353 -- # local d=1 01:20:37.560 05:15:28 bdev_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:20:37.560 05:15:28 bdev_raid -- scripts/common.sh@355 -- # echo 1 01:20:37.560 05:15:28 bdev_raid -- scripts/common.sh@365 -- # ver1[v]=1 01:20:37.560 05:15:28 bdev_raid -- scripts/common.sh@366 -- # decimal 2 01:20:37.560 05:15:29 bdev_raid -- scripts/common.sh@353 -- # local d=2 01:20:37.560 05:15:29 bdev_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:20:37.560 05:15:29 bdev_raid -- scripts/common.sh@355 -- # echo 2 01:20:37.560 05:15:29 bdev_raid -- scripts/common.sh@366 -- # ver2[v]=2 01:20:37.560 05:15:29 bdev_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:20:37.560 05:15:29 bdev_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:20:37.560 05:15:29 bdev_raid -- scripts/common.sh@368 -- # return 0 01:20:37.560 05:15:29 bdev_raid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:20:37.560 05:15:29 bdev_raid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:20:37.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:20:37.560 --rc genhtml_branch_coverage=1 01:20:37.560 --rc genhtml_function_coverage=1 01:20:37.560 --rc genhtml_legend=1 01:20:37.560 --rc geninfo_all_blocks=1 01:20:37.560 --rc geninfo_unexecuted_blocks=1 01:20:37.560 01:20:37.560 ' 01:20:37.560 05:15:29 bdev_raid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:20:37.560 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 01:20:37.560 --rc genhtml_branch_coverage=1 01:20:37.560 --rc genhtml_function_coverage=1 01:20:37.560 --rc genhtml_legend=1 01:20:37.560 --rc geninfo_all_blocks=1 01:20:37.560 --rc geninfo_unexecuted_blocks=1 01:20:37.560 01:20:37.560 ' 01:20:37.560 05:15:29 bdev_raid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:20:37.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:20:37.560 --rc genhtml_branch_coverage=1 01:20:37.560 --rc genhtml_function_coverage=1 01:20:37.560 --rc genhtml_legend=1 01:20:37.560 --rc geninfo_all_blocks=1 01:20:37.560 --rc geninfo_unexecuted_blocks=1 01:20:37.560 01:20:37.560 ' 01:20:37.560 05:15:29 bdev_raid -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:20:37.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:20:37.560 --rc genhtml_branch_coverage=1 01:20:37.560 --rc genhtml_function_coverage=1 01:20:37.560 --rc genhtml_legend=1 01:20:37.560 --rc geninfo_all_blocks=1 01:20:37.560 --rc geninfo_unexecuted_blocks=1 01:20:37.560 01:20:37.560 ' 01:20:37.560 05:15:29 bdev_raid -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 01:20:37.560 05:15:29 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 01:20:37.560 05:15:29 bdev_raid -- bdev/bdev_raid.sh@14 -- # rpc_py=rpc_cmd 01:20:37.560 05:15:29 bdev_raid -- bdev/bdev_raid.sh@946 -- # mkdir -p /raidtest 01:20:37.560 05:15:29 bdev_raid -- bdev/bdev_raid.sh@947 -- # trap 'cleanup; exit 1' EXIT 01:20:37.560 05:15:29 bdev_raid -- bdev/bdev_raid.sh@949 -- # base_blocklen=512 01:20:37.560 05:15:29 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid1_resize_data_offset_test raid_resize_data_offset_test 01:20:37.560 05:15:29 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:20:37.560 05:15:29 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 01:20:37.560 05:15:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 01:20:37.560 ************************************ 
01:20:37.560 START TEST raid1_resize_data_offset_test 01:20:37.560 ************************************ 01:20:37.560 05:15:29 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1129 -- # raid_resize_data_offset_test 01:20:37.560 05:15:29 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@917 -- # raid_pid=59815 01:20:37.560 Process raid pid: 59815 01:20:37.560 05:15:29 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@918 -- # echo 'Process raid pid: 59815' 01:20:37.560 05:15:29 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@919 -- # waitforlisten 59815 01:20:37.560 05:15:29 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@835 -- # '[' -z 59815 ']' 01:20:37.560 05:15:29 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:20:37.560 05:15:29 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@840 -- # local max_retries=100 01:20:37.560 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:20:37.560 05:15:29 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:20:37.560 05:15:29 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@844 -- # xtrace_disable 01:20:37.560 05:15:29 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 01:20:37.560 05:15:29 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@916 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 01:20:37.560 [2024-12-09 05:15:29.119834] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
01:20:37.560 [2024-12-09 05:15:29.120027] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:20:37.817 [2024-12-09 05:15:29.294585] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:20:37.817 [2024-12-09 05:15:29.418558] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:20:38.075 [2024-12-09 05:15:29.622763] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 01:20:38.075 [2024-12-09 05:15:29.622812] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 01:20:38.675 05:15:30 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:20:38.675 05:15:30 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@868 -- # return 0 01:20:38.675 05:15:30 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@922 -- # rpc_cmd bdev_malloc_create -b malloc0 64 512 -o 16 01:20:38.675 05:15:30 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:38.675 05:15:30 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 01:20:38.675 malloc0 01:20:38.675 05:15:30 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:38.675 05:15:30 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@923 -- # rpc_cmd bdev_malloc_create -b malloc1 64 512 -o 16 01:20:38.675 05:15:30 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:38.675 05:15:30 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 01:20:38.675 malloc1 01:20:38.675 05:15:30 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:38.675 05:15:30 
bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@924 -- # rpc_cmd bdev_null_create null0 64 512 01:20:38.675 05:15:30 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:38.675 05:15:30 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 01:20:38.932 null0 01:20:38.932 05:15:30 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:38.932 05:15:30 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@926 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''malloc0 malloc1 null0'\''' -s 01:20:38.932 05:15:30 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:38.932 05:15:30 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 01:20:38.932 [2024-12-09 05:15:30.303946] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc0 is claimed 01:20:38.932 [2024-12-09 05:15:30.306266] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 01:20:38.932 [2024-12-09 05:15:30.306331] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev null0 is claimed 01:20:38.932 [2024-12-09 05:15:30.306543] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 01:20:38.932 [2024-12-09 05:15:30.306563] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 129024, blocklen 512 01:20:38.932 [2024-12-09 05:15:30.306844] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 01:20:38.932 [2024-12-09 05:15:30.307041] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 01:20:38.932 [2024-12-09 05:15:30.307067] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 01:20:38.932 [2024-12-09 05:15:30.307242] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
01:20:38.932 05:15:30 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:38.932 05:15:30 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # rpc_cmd bdev_raid_get_bdevs all 01:20:38.932 05:15:30 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # jq -r '.[].base_bdevs_list[2].data_offset' 01:20:38.932 05:15:30 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:38.932 05:15:30 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 01:20:38.932 05:15:30 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:38.932 05:15:30 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # (( 2048 == 2048 )) 01:20:38.932 05:15:30 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@931 -- # rpc_cmd bdev_null_delete null0 01:20:38.932 05:15:30 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:38.932 05:15:30 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 01:20:38.932 [2024-12-09 05:15:30.367918] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: null0 01:20:38.932 05:15:30 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:38.932 05:15:30 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@935 -- # rpc_cmd bdev_malloc_create -b malloc2 512 512 -o 30 01:20:38.932 05:15:30 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:38.932 05:15:30 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 01:20:39.498 malloc2 01:20:39.498 05:15:30 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:39.498 05:15:30 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@936 -- # rpc_cmd bdev_raid_add_base_bdev 
Raid malloc2 01:20:39.498 05:15:30 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:39.498 05:15:30 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 01:20:39.499 [2024-12-09 05:15:30.899428] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 01:20:39.499 [2024-12-09 05:15:30.914401] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 01:20:39.499 05:15:30 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:39.499 [2024-12-09 05:15:30.916669] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev Raid 01:20:39.499 05:15:30 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # rpc_cmd bdev_raid_get_bdevs all 01:20:39.499 05:15:30 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:39.499 05:15:30 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 01:20:39.499 05:15:30 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # jq -r '.[].base_bdevs_list[2].data_offset' 01:20:39.499 05:15:30 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:39.499 05:15:30 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # (( 2070 == 2070 )) 01:20:39.499 05:15:30 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@941 -- # killprocess 59815 01:20:39.499 05:15:30 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@954 -- # '[' -z 59815 ']' 01:20:39.499 05:15:30 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@958 -- # kill -0 59815 01:20:39.499 05:15:30 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # uname 01:20:39.499 05:15:30 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux 
']' 01:20:39.499 05:15:30 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59815 01:20:39.499 killing process with pid 59815 01:20:39.499 05:15:31 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:20:39.499 05:15:31 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:20:39.499 05:15:31 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59815' 01:20:39.499 05:15:31 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@973 -- # kill 59815 01:20:39.499 [2024-12-09 05:15:31.004112] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 01:20:39.499 05:15:31 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@978 -- # wait 59815 01:20:39.499 [2024-12-09 05:15:31.004917] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev Raid: Operation canceled 01:20:39.499 [2024-12-09 05:15:31.005310] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:20:39.499 [2024-12-09 05:15:31.005348] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: malloc2 01:20:39.499 [2024-12-09 05:15:31.030917] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 01:20:39.499 [2024-12-09 05:15:31.031500] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 01:20:39.499 [2024-12-09 05:15:31.031685] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 01:20:41.399 [2024-12-09 05:15:32.580989] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 01:20:42.335 05:15:33 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@943 -- # return 0 01:20:42.335 01:20:42.335 real 0m4.610s 01:20:42.335 user 0m4.483s 01:20:42.335 sys 0m0.731s 01:20:42.335 05:15:33 
bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1130 -- # xtrace_disable 01:20:42.335 05:15:33 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 01:20:42.335 ************************************ 01:20:42.335 END TEST raid1_resize_data_offset_test 01:20:42.335 ************************************ 01:20:42.335 05:15:33 bdev_raid -- bdev/bdev_raid.sh@953 -- # run_test raid0_resize_superblock_test raid_resize_superblock_test 0 01:20:42.335 05:15:33 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:20:42.335 05:15:33 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 01:20:42.335 05:15:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 01:20:42.335 ************************************ 01:20:42.335 START TEST raid0_resize_superblock_test 01:20:42.335 ************************************ 01:20:42.335 05:15:33 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 0 01:20:42.335 05:15:33 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=0 01:20:42.335 05:15:33 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=59899 01:20:42.335 Process raid pid: 59899 01:20:42.335 05:15:33 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 59899' 01:20:42.335 05:15:33 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 01:20:42.336 05:15:33 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 59899 01:20:42.336 05:15:33 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 59899 ']' 01:20:42.336 05:15:33 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:20:42.336 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 01:20:42.336 05:15:33 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 01:20:42.336 05:15:33 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:20:42.336 05:15:33 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 01:20:42.336 05:15:33 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:20:42.336 [2024-12-09 05:15:33.778336] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:20:42.336 [2024-12-09 05:15:33.778503] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:20:42.336 [2024-12-09 05:15:33.943820] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:20:42.594 [2024-12-09 05:15:34.059283] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:20:42.851 [2024-12-09 05:15:34.260933] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 01:20:42.851 [2024-12-09 05:15:34.261004] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 01:20:43.419 05:15:34 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:20:43.419 05:15:34 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0 01:20:43.419 05:15:34 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 01:20:43.419 05:15:34 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:43.419 05:15:34 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:20:43.676 
malloc0 01:20:43.676 05:15:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:43.676 05:15:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 01:20:43.676 05:15:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:43.676 05:15:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:20:43.676 [2024-12-09 05:15:35.289792] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc0 01:20:43.935 [2024-12-09 05:15:35.290908] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:20:43.935 [2024-12-09 05:15:35.291015] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 01:20:43.935 [2024-12-09 05:15:35.291097] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:20:43.935 [2024-12-09 05:15:35.294224] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:20:43.935 [2024-12-09 05:15:35.294533] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 01:20:43.935 pt0 01:20:43.935 05:15:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:43.935 05:15:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 01:20:43.935 05:15:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:43.935 05:15:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:20:43.935 1e1dc9a5-8b68-4ded-9c63-34c3a2b9d174 01:20:43.935 05:15:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:43.935 05:15:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 01:20:43.935 05:15:35 
bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:43.935 05:15:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:20:43.935 25487678-8530-4ed7-8008-39749a379470 01:20:43.935 05:15:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:43.935 05:15:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 01:20:43.935 05:15:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:43.935 05:15:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:20:43.935 07b6a8d3-2a2d-40f2-8e5b-4db63da6a278 01:20:43.935 05:15:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:43.935 05:15:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 01:20:43.935 05:15:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@870 -- # rpc_cmd bdev_raid_create -n Raid -r 0 -z 64 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 01:20:43.935 05:15:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:43.935 05:15:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:20:43.935 [2024-12-09 05:15:35.488424] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 25487678-8530-4ed7-8008-39749a379470 is claimed 01:20:43.935 [2024-12-09 05:15:35.488852] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 07b6a8d3-2a2d-40f2-8e5b-4db63da6a278 is claimed 01:20:43.935 [2024-12-09 05:15:35.489026] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 01:20:43.935 [2024-12-09 05:15:35.489049] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 245760, blocklen 512 01:20:43.935 [2024-12-09 05:15:35.489359] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 01:20:43.935 [2024-12-09 05:15:35.489630] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 01:20:43.935 [2024-12-09 05:15:35.489645] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 01:20:43.935 [2024-12-09 05:15:35.489803] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:20:43.935 05:15:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:43.935 05:15:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 01:20:43.935 05:15:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 01:20:43.935 05:15:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:43.935 05:15:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:20:43.935 05:15:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:43.935 05:15:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 01:20:44.193 05:15:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 01:20:44.193 05:15:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 01:20:44.193 05:15:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:44.193 05:15:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:20:44.193 05:15:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:44.193 05:15:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 01:20:44.193 05:15:35 bdev_raid.raid0_resize_superblock_test -- 
bdev/bdev_raid.sh@879 -- # case $raid_level in 01:20:44.193 05:15:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # jq '.[].num_blocks' 01:20:44.193 05:15:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 01:20:44.193 05:15:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # rpc_cmd bdev_get_bdevs -b Raid 01:20:44.193 05:15:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:44.193 05:15:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:20:44.193 [2024-12-09 05:15:35.608627] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 01:20:44.193 05:15:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:44.193 05:15:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 01:20:44.193 05:15:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 01:20:44.193 05:15:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # (( 245760 == 245760 )) 01:20:44.193 05:15:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 01:20:44.193 05:15:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:44.193 05:15:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:20:44.193 [2024-12-09 05:15:35.652592] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 01:20:44.193 [2024-12-09 05:15:35.652618] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '25487678-8530-4ed7-8008-39749a379470' was resized: old size 131072, new size 204800 01:20:44.194 05:15:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:44.194 05:15:35 
bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 01:20:44.194 05:15:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:44.194 05:15:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:20:44.194 [2024-12-09 05:15:35.660539] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 01:20:44.194 [2024-12-09 05:15:35.660565] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '07b6a8d3-2a2d-40f2-8e5b-4db63da6a278' was resized: old size 131072, new size 204800 01:20:44.194 [2024-12-09 05:15:35.660774] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 245760 to 393216 01:20:44.194 05:15:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:44.194 05:15:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 01:20:44.194 05:15:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 01:20:44.194 05:15:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:44.194 05:15:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:20:44.194 05:15:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:44.194 05:15:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 01:20:44.194 05:15:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 01:20:44.194 05:15:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:44.194 05:15:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 01:20:44.194 05:15:35 
bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:20:44.194 05:15:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:44.194 05:15:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 01:20:44.194 05:15:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 01:20:44.194 05:15:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # rpc_cmd bdev_get_bdevs -b Raid 01:20:44.194 05:15:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 01:20:44.194 05:15:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # jq '.[].num_blocks' 01:20:44.194 05:15:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:44.194 05:15:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:20:44.194 [2024-12-09 05:15:35.776648] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 01:20:44.194 05:15:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:44.452 05:15:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 01:20:44.452 05:15:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 01:20:44.452 05:15:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # (( 393216 == 393216 )) 01:20:44.452 05:15:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 01:20:44.452 05:15:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:44.452 05:15:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:20:44.452 [2024-12-09 05:15:35.832483] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being 
removed: closing lvstore lvs0 01:20:44.452 [2024-12-09 05:15:35.832763] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 01:20:44.452 [2024-12-09 05:15:35.832795] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 01:20:44.452 [2024-12-09 05:15:35.832814] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 01:20:44.452 [2024-12-09 05:15:35.832949] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 01:20:44.452 [2024-12-09 05:15:35.832994] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 01:20:44.452 [2024-12-09 05:15:35.833012] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 01:20:44.452 05:15:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:44.452 05:15:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 01:20:44.452 05:15:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:44.452 05:15:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:20:44.452 [2024-12-09 05:15:35.840403] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc0 01:20:44.452 [2024-12-09 05:15:35.840637] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:20:44.452 [2024-12-09 05:15:35.840756] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 01:20:44.452 [2024-12-09 05:15:35.840843] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:20:44.452 [2024-12-09 05:15:35.843668] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:20:44.452 [2024-12-09 05:15:35.843772] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 
01:20:44.452 pt0 01:20:44.452 05:15:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:44.452 05:15:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 01:20:44.452 05:15:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:44.452 [2024-12-09 05:15:35.845945] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 25487678-8530-4ed7-8008-39749a379470 01:20:44.452 [2024-12-09 05:15:35.846008] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 25487678-8530-4ed7-8008-39749a379470 is claimed 01:20:44.452 05:15:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:20:44.452 [2024-12-09 05:15:35.846115] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 07b6a8d3-2a2d-40f2-8e5b-4db63da6a278 01:20:44.452 [2024-12-09 05:15:35.846145] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 07b6a8d3-2a2d-40f2-8e5b-4db63da6a278 is claimed 01:20:44.452 [2024-12-09 05:15:35.846274] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 07b6a8d3-2a2d-40f2-8e5b-4db63da6a278 (2) smaller than existing raid bdev Raid (3) 01:20:44.452 [2024-12-09 05:15:35.846308] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 25487678-8530-4ed7-8008-39749a379470: File exists 01:20:44.452 [2024-12-09 05:15:35.846463] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 01:20:44.452 [2024-12-09 05:15:35.846485] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 393216, blocklen 512 01:20:44.453 [2024-12-09 05:15:35.846770] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 01:20:44.453 [2024-12-09 05:15:35.846937] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 01:20:44.453 [2024-12-09 
05:15:35.846949] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 01:20:44.453 [2024-12-09 05:15:35.847097] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:20:44.453 05:15:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:44.453 05:15:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 01:20:44.453 05:15:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # rpc_cmd bdev_get_bdevs -b Raid 01:20:44.453 05:15:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:44.453 05:15:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 01:20:44.453 05:15:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # jq '.[].num_blocks' 01:20:44.453 05:15:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:20:44.453 [2024-12-09 05:15:35.860672] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 01:20:44.453 05:15:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:44.453 05:15:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 01:20:44.453 05:15:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 01:20:44.453 05:15:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # (( 393216 == 393216 )) 01:20:44.453 05:15:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 59899 01:20:44.453 05:15:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 59899 ']' 01:20:44.453 05:15:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 59899 01:20:44.453 05:15:35 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@959 -- # uname 01:20:44.453 05:15:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:20:44.453 05:15:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59899 01:20:44.453 killing process with pid 59899 01:20:44.453 05:15:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:20:44.453 05:15:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:20:44.453 05:15:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59899' 01:20:44.453 05:15:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 59899 01:20:44.453 [2024-12-09 05:15:35.948543] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 01:20:44.453 [2024-12-09 05:15:35.948597] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 01:20:44.453 [2024-12-09 05:15:35.948639] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 01:20:44.453 05:15:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 59899 01:20:44.453 [2024-12-09 05:15:35.948651] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 01:20:45.867 [2024-12-09 05:15:37.176744] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 01:20:46.801 05:15:38 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 01:20:46.801 01:20:46.801 real 0m4.540s 01:20:46.801 user 0m4.774s 01:20:46.801 sys 0m0.716s 01:20:46.801 05:15:38 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 01:20:46.801 ************************************ 01:20:46.801 END TEST raid0_resize_superblock_test 01:20:46.801 
************************************ 01:20:46.801 05:15:38 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:20:46.801 05:15:38 bdev_raid -- bdev/bdev_raid.sh@954 -- # run_test raid1_resize_superblock_test raid_resize_superblock_test 1 01:20:46.801 05:15:38 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:20:46.801 05:15:38 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 01:20:46.801 05:15:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 01:20:46.801 ************************************ 01:20:46.801 START TEST raid1_resize_superblock_test 01:20:46.801 ************************************ 01:20:46.801 05:15:38 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 1 01:20:46.801 05:15:38 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=1 01:20:46.801 Process raid pid: 60000 01:20:46.801 05:15:38 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60000 01:20:46.801 05:15:38 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60000' 01:20:46.801 05:15:38 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60000 01:20:46.801 05:15:38 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 01:20:46.801 05:15:38 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 60000 ']' 01:20:46.801 05:15:38 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:20:46.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
01:20:46.801 05:15:38 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 01:20:46.801 05:15:38 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:20:46.801 05:15:38 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 01:20:46.801 05:15:38 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:20:46.801 [2024-12-09 05:15:38.369648] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:20:46.801 [2024-12-09 05:15:38.369807] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:20:47.060 [2024-12-09 05:15:38.536288] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:20:47.060 [2024-12-09 05:15:38.662958] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:20:47.318 [2024-12-09 05:15:38.869937] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 01:20:47.318 [2024-12-09 05:15:38.870271] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 01:20:47.885 05:15:39 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:20:47.885 05:15:39 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0 01:20:47.886 05:15:39 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 01:20:47.886 05:15:39 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:47.886 05:15:39 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:20:48.454 malloc0 01:20:48.454 
05:15:39 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:48.454 05:15:39 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 01:20:48.454 05:15:39 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:48.454 05:15:39 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:20:48.454 [2024-12-09 05:15:39.982483] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc0 01:20:48.454 [2024-12-09 05:15:39.982573] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:20:48.454 [2024-12-09 05:15:39.982605] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 01:20:48.454 [2024-12-09 05:15:39.982623] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:20:48.454 [2024-12-09 05:15:39.985338] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:20:48.454 [2024-12-09 05:15:39.985393] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 01:20:48.454 pt0 01:20:48.454 05:15:39 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:48.454 05:15:39 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 01:20:48.454 05:15:39 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:48.454 05:15:39 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:20:48.713 c7ba91dc-64ff-4863-a0ac-3c454f966ee5 01:20:48.713 05:15:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:48.713 05:15:40 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 01:20:48.713 05:15:40 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:48.713 05:15:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:20:48.713 53d40391-b64a-4948-8e14-71d175de9830 01:20:48.713 05:15:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:48.713 05:15:40 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 01:20:48.713 05:15:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:48.713 05:15:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:20:48.713 ef5ff003-0323-46ab-b91b-25d3b29fbfb4 01:20:48.713 05:15:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:48.713 05:15:40 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 01:20:48.713 05:15:40 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@871 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 01:20:48.713 05:15:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:48.713 05:15:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:20:48.713 [2024-12-09 05:15:40.184281] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 53d40391-b64a-4948-8e14-71d175de9830 is claimed 01:20:48.713 [2024-12-09 05:15:40.184435] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev ef5ff003-0323-46ab-b91b-25d3b29fbfb4 is claimed 01:20:48.713 [2024-12-09 05:15:40.184610] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 01:20:48.713 [2024-12-09 05:15:40.184633] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 122880, blocklen 512 01:20:48.713 [2024-12-09 05:15:40.184971] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 01:20:48.713 [2024-12-09 05:15:40.185249] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 01:20:48.713 [2024-12-09 05:15:40.185264] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 01:20:48.713 [2024-12-09 05:15:40.185455] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:20:48.713 05:15:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:48.713 05:15:40 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 01:20:48.713 05:15:40 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 01:20:48.713 05:15:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:48.713 05:15:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:20:48.713 05:15:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:48.713 05:15:40 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 01:20:48.713 05:15:40 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 01:20:48.713 05:15:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:48.713 05:15:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:20:48.713 05:15:40 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 01:20:48.713 05:15:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:48.713 05:15:40 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 01:20:48.713 05:15:40 bdev_raid.raid1_resize_superblock_test -- 
bdev/bdev_raid.sh@879 -- # case $raid_level in 01:20:48.713 05:15:40 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # rpc_cmd bdev_get_bdevs -b Raid 01:20:48.713 05:15:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:48.713 05:15:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:20:48.713 05:15:40 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 01:20:48.713 05:15:40 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # jq '.[].num_blocks' 01:20:48.713 [2024-12-09 05:15:40.292528] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 01:20:48.713 05:15:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:48.972 05:15:40 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 01:20:48.972 05:15:40 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 01:20:48.972 05:15:40 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # (( 122880 == 122880 )) 01:20:48.972 05:15:40 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 01:20:48.972 05:15:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:48.972 05:15:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:20:48.972 [2024-12-09 05:15:40.344467] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 01:20:48.972 [2024-12-09 05:15:40.344610] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '53d40391-b64a-4948-8e14-71d175de9830' was resized: old size 131072, new size 204800 01:20:48.973 05:15:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:48.973 05:15:40 
bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 01:20:48.973 05:15:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:48.973 05:15:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:20:48.973 [2024-12-09 05:15:40.352448] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 01:20:48.973 [2024-12-09 05:15:40.352472] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'ef5ff003-0323-46ab-b91b-25d3b29fbfb4' was resized: old size 131072, new size 204800 01:20:48.973 [2024-12-09 05:15:40.352505] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 122880 to 196608 01:20:48.973 05:15:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:48.973 05:15:40 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 01:20:48.973 05:15:40 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 01:20:48.973 05:15:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:48.973 05:15:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:20:48.973 05:15:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:48.973 05:15:40 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 01:20:48.973 05:15:40 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 01:20:48.973 05:15:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:48.973 05:15:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:20:48.973 05:15:40 
bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 01:20:48.973 05:15:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:48.973 05:15:40 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 01:20:48.973 05:15:40 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 01:20:48.973 05:15:40 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # rpc_cmd bdev_get_bdevs -b Raid 01:20:48.973 05:15:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:48.973 05:15:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:20:48.973 05:15:40 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 01:20:48.973 05:15:40 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # jq '.[].num_blocks' 01:20:48.973 [2024-12-09 05:15:40.464604] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 01:20:48.973 05:15:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:48.973 05:15:40 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 01:20:48.973 05:15:40 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 01:20:48.973 05:15:40 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # (( 196608 == 196608 )) 01:20:48.973 05:15:40 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 01:20:48.973 05:15:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:48.973 05:15:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:20:48.973 [2024-12-09 05:15:40.520373] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 
being removed: closing lvstore lvs0 01:20:48.973 [2024-12-09 05:15:40.520632] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 01:20:48.973 [2024-12-09 05:15:40.520708] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 01:20:48.973 [2024-12-09 05:15:40.520984] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 01:20:48.973 [2024-12-09 05:15:40.521476] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 01:20:48.973 [2024-12-09 05:15:40.521709] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 01:20:48.973 [2024-12-09 05:15:40.521741] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 01:20:48.973 05:15:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:48.973 05:15:40 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 01:20:48.973 05:15:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:48.973 05:15:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:20:48.973 [2024-12-09 05:15:40.532299] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc0 01:20:48.973 [2024-12-09 05:15:40.532509] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:20:48.973 [2024-12-09 05:15:40.532577] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 01:20:48.973 [2024-12-09 05:15:40.532674] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:20:48.973 [2024-12-09 05:15:40.535557] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:20:48.973 [2024-12-09 05:15:40.535739] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 
01:20:48.973 pt0 01:20:48.973 05:15:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:48.973 05:15:40 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 01:20:48.973 [2024-12-09 05:15:40.538012] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 53d40391-b64a-4948-8e14-71d175de9830 01:20:48.973 [2024-12-09 05:15:40.538109] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 53d40391-b64a-4948-8e14-71d175de9830 is claimed 01:20:48.973 [2024-12-09 05:15:40.538236] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev ef5ff003-0323-46ab-b91b-25d3b29fbfb4 01:20:48.973 [2024-12-09 05:15:40.538336] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev ef5ff003-0323-46ab-b91b-25d3b29fbfb4 is claimed 01:20:48.973 05:15:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:48.973 [2024-12-09 05:15:40.538640] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev ef5ff003-0323-46ab-b91b-25d3b29fbfb4 (2) smaller than existing raid bdev Raid (3) 01:20:48.973 [2024-12-09 05:15:40.538675] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 53d40391-b64a-4948-8e14-71d175de9830: File exists 01:20:48.973 [2024-12-09 05:15:40.538726] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 01:20:48.973 [2024-12-09 05:15:40.538744] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 01:20:48.973 [2024-12-09 05:15:40.539040] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 01:20:48.973 05:15:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:20:48.973 [2024-12-09 05:15:40.539470] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 01:20:48.973 [2024-12-09 
05:15:40.539491] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 01:20:48.973 [2024-12-09 05:15:40.539657] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:20:48.973 05:15:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:48.973 05:15:40 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 01:20:48.973 05:15:40 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # rpc_cmd bdev_get_bdevs -b Raid 01:20:48.973 05:15:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:48.973 05:15:40 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 01:20:48.973 05:15:40 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # jq '.[].num_blocks' 01:20:48.973 05:15:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:20:48.973 [2024-12-09 05:15:40.556618] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 01:20:48.973 05:15:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:49.232 05:15:40 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 01:20:49.232 05:15:40 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 01:20:49.232 05:15:40 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # (( 196608 == 196608 )) 01:20:49.232 05:15:40 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60000 01:20:49.232 05:15:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 60000 ']' 01:20:49.232 05:15:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 60000 01:20:49.232 05:15:40 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@959 -- # uname 01:20:49.232 05:15:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:20:49.232 05:15:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60000 01:20:49.232 killing process with pid 60000 01:20:49.232 05:15:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:20:49.232 05:15:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:20:49.232 05:15:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60000' 01:20:49.232 05:15:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 60000 01:20:49.232 [2024-12-09 05:15:40.640854] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 01:20:49.232 [2024-12-09 05:15:40.640906] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 01:20:49.232 05:15:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 60000 01:20:49.232 [2024-12-09 05:15:40.640954] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 01:20:49.232 [2024-12-09 05:15:40.640966] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 01:20:50.609 [2024-12-09 05:15:41.925539] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 01:20:51.609 05:15:43 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 01:20:51.609 01:20:51.609 real 0m4.748s 01:20:51.609 user 0m4.992s 01:20:51.609 sys 0m0.734s 01:20:51.609 05:15:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 01:20:51.609 ************************************ 01:20:51.609 END TEST raid1_resize_superblock_test 01:20:51.609 
************************************ 01:20:51.609 05:15:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:20:51.609 05:15:43 bdev_raid -- bdev/bdev_raid.sh@956 -- # uname -s 01:20:51.609 05:15:43 bdev_raid -- bdev/bdev_raid.sh@956 -- # '[' Linux = Linux ']' 01:20:51.609 05:15:43 bdev_raid -- bdev/bdev_raid.sh@956 -- # modprobe -n nbd 01:20:51.609 05:15:43 bdev_raid -- bdev/bdev_raid.sh@957 -- # has_nbd=true 01:20:51.609 05:15:43 bdev_raid -- bdev/bdev_raid.sh@958 -- # modprobe nbd 01:20:51.609 05:15:43 bdev_raid -- bdev/bdev_raid.sh@959 -- # run_test raid_function_test_raid0 raid_function_test raid0 01:20:51.609 05:15:43 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:20:51.609 05:15:43 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 01:20:51.609 05:15:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 01:20:51.609 ************************************ 01:20:51.609 START TEST raid_function_test_raid0 01:20:51.609 ************************************ 01:20:51.609 05:15:43 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1129 -- # raid_function_test raid0 01:20:51.609 05:15:43 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@64 -- # local raid_level=raid0 01:20:51.609 05:15:43 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 01:20:51.609 05:15:43 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_bdev 01:20:51.609 Process raid pid: 60103 01:20:51.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
01:20:51.609 05:15:43 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # raid_pid=60103 01:20:51.609 05:15:43 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60103' 01:20:51.609 05:15:43 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 01:20:51.609 05:15:43 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@71 -- # waitforlisten 60103 01:20:51.609 05:15:43 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@835 -- # '[' -z 60103 ']' 01:20:51.609 05:15:43 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:20:51.609 05:15:43 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@840 -- # local max_retries=100 01:20:51.609 05:15:43 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:20:51.609 05:15:43 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@844 -- # xtrace_disable 01:20:51.609 05:15:43 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 01:20:51.609 [2024-12-09 05:15:43.202038] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
01:20:51.609 [2024-12-09 05:15:43.202452] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:20:51.868 [2024-12-09 05:15:43.373020] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:20:52.127 [2024-12-09 05:15:43.497709] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:20:52.127 [2024-12-09 05:15:43.702959] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 01:20:52.127 [2024-12-09 05:15:43.703310] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 01:20:52.693 05:15:44 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:20:52.693 05:15:44 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@868 -- # return 0 01:20:52.693 05:15:44 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 01:20:52.693 05:15:44 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:52.693 05:15:44 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 01:20:52.693 Base_1 01:20:52.693 05:15:44 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:52.693 05:15:44 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 01:20:52.693 05:15:44 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:52.693 05:15:44 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 01:20:52.693 Base_2 01:20:52.693 05:15:44 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:52.693 05:15:44 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 
64 -r raid0 -b ''\''Base_1 Base_2'\''' -n raid 01:20:52.693 05:15:44 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:52.693 05:15:44 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 01:20:52.693 [2024-12-09 05:15:44.257247] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 01:20:52.693 [2024-12-09 05:15:44.259811] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 01:20:52.693 [2024-12-09 05:15:44.259891] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 01:20:52.693 [2024-12-09 05:15:44.259909] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 01:20:52.693 [2024-12-09 05:15:44.260171] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 01:20:52.693 [2024-12-09 05:15:44.260338] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 01:20:52.693 [2024-12-09 05:15:44.260372] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 01:20:52.693 [2024-12-09 05:15:44.260531] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:20:52.693 05:15:44 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:52.694 05:15:44 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 01:20:52.694 05:15:44 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:52.694 05:15:44 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 01:20:52.694 05:15:44 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 01:20:52.694 05:15:44 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:52.952 05:15:44 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@77 -- # raid_bdev=raid 01:20:52.952 05:15:44 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 01:20:52.952 05:15:44 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 01:20:52.952 05:15:44 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 01:20:52.952 05:15:44 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 01:20:52.952 05:15:44 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 01:20:52.952 05:15:44 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 01:20:52.952 05:15:44 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 01:20:52.952 05:15:44 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 01:20:52.952 05:15:44 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 01:20:52.952 05:15:44 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 01:20:52.952 05:15:44 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 01:20:53.210 [2024-12-09 05:15:44.569328] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 01:20:53.210 /dev/nbd0 01:20:53.210 05:15:44 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 01:20:53.210 05:15:44 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 01:20:53.210 05:15:44 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 01:20:53.210 05:15:44 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # local i 01:20:53.210 05:15:44 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i = 1 )) 01:20:53.210 
05:15:44 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i <= 20 )) 01:20:53.210 05:15:44 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 01:20:53.210 05:15:44 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@877 -- # break 01:20:53.210 05:15:44 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i = 1 )) 01:20:53.210 05:15:44 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i <= 20 )) 01:20:53.210 05:15:44 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 01:20:53.210 1+0 records in 01:20:53.210 1+0 records out 01:20:53.210 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000288984 s, 14.2 MB/s 01:20:53.210 05:15:44 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:20:53.210 05:15:44 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # size=4096 01:20:53.210 05:15:44 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:20:53.210 05:15:44 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 01:20:53.210 05:15:44 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@893 -- # return 0 01:20:53.210 05:15:44 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 01:20:53.210 05:15:44 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 01:20:53.210 05:15:44 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 01:20:53.210 05:15:44 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 01:20:53.210 05:15:44 bdev_raid.raid_function_test_raid0 -- 
bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 01:20:53.467 05:15:44 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 01:20:53.467 { 01:20:53.467 "nbd_device": "/dev/nbd0", 01:20:53.467 "bdev_name": "raid" 01:20:53.467 } 01:20:53.467 ]' 01:20:53.467 05:15:44 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 01:20:53.467 { 01:20:53.467 "nbd_device": "/dev/nbd0", 01:20:53.467 "bdev_name": "raid" 01:20:53.467 } 01:20:53.467 ]' 01:20:53.467 05:15:44 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 01:20:53.467 05:15:44 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 01:20:53.467 05:15:44 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 01:20:53.467 05:15:44 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 01:20:53.467 05:15:44 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 01:20:53.467 05:15:44 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 1 01:20:53.467 05:15:44 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # count=1 01:20:53.467 05:15:44 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 01:20:53.467 05:15:44 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 01:20:53.467 05:15:44 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 01:20:53.467 05:15:44 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 01:20:53.467 05:15:44 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local blksize 01:20:53.467 05:15:44 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 01:20:53.467 05:15:44 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # grep -v 
LOG-SEC 01:20:53.467 05:15:44 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 01:20:53.467 05:15:44 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # blksize=512 01:20:53.467 05:15:44 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 01:20:53.467 05:15:44 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 01:20:53.467 05:15:44 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 01:20:53.467 05:15:44 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 01:20:53.467 05:15:44 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 01:20:53.467 05:15:44 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 01:20:53.467 05:15:44 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_off 01:20:53.467 05:15:44 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_len 01:20:53.467 05:15:44 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 01:20:53.467 4096+0 records in 01:20:53.467 4096+0 records out 01:20:53.467 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.031636 s, 66.3 MB/s 01:20:53.467 05:15:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 01:20:53.723 4096+0 records in 01:20:53.723 4096+0 records out 01:20:53.723 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.311593 s, 6.7 MB/s 01:20:53.723 05:15:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 01:20:53.980 05:15:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 01:20:53.980 05:15:45 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@36 -- # (( i = 0 )) 01:20:53.980 05:15:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 01:20:53.980 05:15:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=0 01:20:53.980 05:15:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 01:20:53.980 05:15:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 01:20:53.980 128+0 records in 01:20:53.980 128+0 records out 01:20:53.980 65536 bytes (66 kB, 64 KiB) copied, 0.000619312 s, 106 MB/s 01:20:53.980 05:15:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 01:20:53.980 05:15:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 01:20:53.980 05:15:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 01:20:53.980 05:15:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 01:20:53.980 05:15:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 01:20:53.980 05:15:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 01:20:53.980 05:15:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 01:20:53.980 05:15:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 01:20:53.980 2035+0 records in 01:20:53.980 2035+0 records out 01:20:53.980 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.00880287 s, 118 MB/s 01:20:53.980 05:15:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 01:20:53.980 05:15:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 01:20:53.980 05:15:45 
bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 01:20:53.980 05:15:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 01:20:53.980 05:15:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 01:20:53.980 05:15:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 01:20:53.980 05:15:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 01:20:53.980 05:15:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 01:20:53.980 456+0 records in 01:20:53.980 456+0 records out 01:20:53.980 233472 bytes (233 kB, 228 KiB) copied, 0.00248014 s, 94.1 MB/s 01:20:53.980 05:15:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 01:20:53.980 05:15:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 01:20:53.980 05:15:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 01:20:53.980 05:15:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 01:20:53.980 05:15:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 01:20:53.980 05:15:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@52 -- # return 0 01:20:53.980 05:15:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 01:20:53.980 05:15:45 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 01:20:53.980 05:15:45 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 01:20:53.980 05:15:45 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list 01:20:53.980 05:15:45 
bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 01:20:53.980 05:15:45 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 01:20:53.980 05:15:45 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 01:20:54.237 05:15:45 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 01:20:54.237 [2024-12-09 05:15:45.765739] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:20:54.237 05:15:45 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 01:20:54.237 05:15:45 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 01:20:54.237 05:15:45 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:20:54.237 05:15:45 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:20:54.237 05:15:45 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 01:20:54.237 05:15:45 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 01:20:54.237 05:15:45 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 01:20:54.237 05:15:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 01:20:54.237 05:15:45 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 01:20:54.237 05:15:45 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 01:20:54.493 05:15:46 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 01:20:54.493 05:15:46 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]' 01:20:54.493 05:15:46 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 01:20:54.751 05:15:46 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 01:20:54.751 05:15:46 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 01:20:54.751 05:15:46 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 01:20:54.751 05:15:46 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 01:20:54.751 05:15:46 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0 01:20:54.751 05:15:46 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0 01:20:54.751 05:15:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # count=0 01:20:54.751 05:15:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 01:20:54.751 05:15:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # killprocess 60103 01:20:54.751 05:15:46 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@954 -- # '[' -z 60103 ']' 01:20:54.751 05:15:46 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@958 -- # kill -0 60103 01:20:54.751 05:15:46 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # uname 01:20:54.751 05:15:46 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:20:54.751 05:15:46 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60103 01:20:54.751 killing process with pid 60103 01:20:54.751 05:15:46 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:20:54.751 05:15:46 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:20:54.751 05:15:46 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60103' 01:20:54.751 05:15:46 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@973 -- # kill 60103 
01:20:54.751 [2024-12-09 05:15:46.207062] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 01:20:54.751 05:15:46 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@978 -- # wait 60103 01:20:54.751 [2024-12-09 05:15:46.207170] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 01:20:54.751 [2024-12-09 05:15:46.207227] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 01:20:54.751 [2024-12-09 05:15:46.207249] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 01:20:55.009 [2024-12-09 05:15:46.370474] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 01:20:55.943 ************************************ 01:20:55.943 END TEST raid_function_test_raid0 01:20:55.943 ************************************ 01:20:55.943 05:15:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@99 -- # return 0 01:20:55.943 01:20:55.943 real 0m4.308s 01:20:55.943 user 0m5.219s 01:20:55.943 sys 0m1.109s 01:20:55.943 05:15:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1130 -- # xtrace_disable 01:20:55.943 05:15:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 01:20:55.943 05:15:47 bdev_raid -- bdev/bdev_raid.sh@960 -- # run_test raid_function_test_concat raid_function_test concat 01:20:55.943 05:15:47 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:20:55.943 05:15:47 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 01:20:55.943 05:15:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 01:20:55.943 ************************************ 01:20:55.943 START TEST raid_function_test_concat 01:20:55.943 ************************************ 01:20:55.943 05:15:47 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1129 -- # raid_function_test concat 01:20:55.943 05:15:47 bdev_raid.raid_function_test_concat -- 
bdev/bdev_raid.sh@64 -- # local raid_level=concat 01:20:55.943 05:15:47 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 01:20:55.943 05:15:47 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_bdev 01:20:55.943 05:15:47 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # raid_pid=60237 01:20:55.943 05:15:47 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 01:20:55.943 Process raid pid: 60237 01:20:55.943 05:15:47 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60237' 01:20:55.943 05:15:47 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@71 -- # waitforlisten 60237 01:20:55.943 05:15:47 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@835 -- # '[' -z 60237 ']' 01:20:55.943 05:15:47 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:20:55.943 05:15:47 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@840 -- # local max_retries=100 01:20:55.943 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:20:55.943 05:15:47 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:20:55.943 05:15:47 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@844 -- # xtrace_disable 01:20:55.943 05:15:47 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 01:20:56.202 [2024-12-09 05:15:47.566792] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
01:20:56.202 [2024-12-09 05:15:47.566949] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:20:56.202 [2024-12-09 05:15:47.741605] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:20:56.460 [2024-12-09 05:15:47.868990] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:20:56.460 [2024-12-09 05:15:48.072064] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 01:20:56.460 [2024-12-09 05:15:48.072147] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 01:20:57.028 05:15:48 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:20:57.028 05:15:48 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@868 -- # return 0 01:20:57.028 05:15:48 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 01:20:57.028 05:15:48 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:57.028 05:15:48 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 01:20:57.028 Base_1 01:20:57.028 05:15:48 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:57.028 05:15:48 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 01:20:57.028 05:15:48 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:57.028 05:15:48 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 01:20:57.029 Base_2 01:20:57.029 05:15:48 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:57.029 05:15:48 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@75 -- # rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''Base_1 Base_2'\''' -n raid 01:20:57.029 05:15:48 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:57.029 05:15:48 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 01:20:57.029 [2024-12-09 05:15:48.639443] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 01:20:57.029 [2024-12-09 05:15:48.642055] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 01:20:57.029 [2024-12-09 05:15:48.642475] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 01:20:57.029 [2024-12-09 05:15:48.642501] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 01:20:57.029 [2024-12-09 05:15:48.642796] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 01:20:57.029 [2024-12-09 05:15:48.642986] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 01:20:57.029 [2024-12-09 05:15:48.643001] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 01:20:57.029 [2024-12-09 05:15:48.643204] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:20:57.288 05:15:48 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:57.288 05:15:48 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 01:20:57.288 05:15:48 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:57.288 05:15:48 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 01:20:57.288 05:15:48 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 01:20:57.288 05:15:48 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:57.288 05:15:48 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 01:20:57.288 05:15:48 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 01:20:57.288 05:15:48 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 01:20:57.288 05:15:48 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 01:20:57.288 05:15:48 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 01:20:57.288 05:15:48 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list 01:20:57.288 05:15:48 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 01:20:57.288 05:15:48 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list 01:20:57.288 05:15:48 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i 01:20:57.288 05:15:48 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 01:20:57.288 05:15:48 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 01:20:57.288 05:15:48 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 01:20:57.553 [2024-12-09 05:15:48.967520] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 01:20:57.553 /dev/nbd0 01:20:57.553 05:15:49 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 01:20:57.553 05:15:49 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 01:20:57.553 05:15:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 01:20:57.553 05:15:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # local i 01:20:57.553 05:15:49 bdev_raid.raid_function_test_concat -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 01:20:57.553 05:15:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 01:20:57.553 05:15:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 01:20:57.553 05:15:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@877 -- # break 01:20:57.553 05:15:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i = 1 )) 01:20:57.553 05:15:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 01:20:57.553 05:15:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 01:20:57.553 1+0 records in 01:20:57.553 1+0 records out 01:20:57.553 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000312932 s, 13.1 MB/s 01:20:57.553 05:15:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:20:57.553 05:15:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # size=4096 01:20:57.553 05:15:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:20:57.553 05:15:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 01:20:57.553 05:15:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@893 -- # return 0 01:20:57.553 05:15:49 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ )) 01:20:57.553 05:15:49 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 01:20:57.553 05:15:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 01:20:57.553 05:15:49 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk.sock 01:20:57.553 05:15:49 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 01:20:57.812 05:15:49 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 01:20:57.812 { 01:20:57.812 "nbd_device": "/dev/nbd0", 01:20:57.812 "bdev_name": "raid" 01:20:57.812 } 01:20:57.812 ]' 01:20:57.812 05:15:49 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[ 01:20:57.812 { 01:20:57.812 "nbd_device": "/dev/nbd0", 01:20:57.812 "bdev_name": "raid" 01:20:57.812 } 01:20:57.812 ]' 01:20:57.812 05:15:49 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 01:20:58.070 05:15:49 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 01:20:58.070 05:15:49 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 01:20:58.070 05:15:49 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 01:20:58.070 05:15:49 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1 01:20:58.070 05:15:49 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1 01:20:58.070 05:15:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # count=1 01:20:58.070 05:15:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 01:20:58.070 05:15:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 01:20:58.070 05:15:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 01:20:58.070 05:15:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 01:20:58.070 05:15:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local blksize 01:20:58.070 05:15:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 
01:20:58.070 05:15:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 01:20:58.070 05:15:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 01:20:58.070 05:15:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # blksize=512 01:20:58.070 05:15:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 01:20:58.070 05:15:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 01:20:58.070 05:15:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 01:20:58.070 05:15:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 01:20:58.070 05:15:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 01:20:58.070 05:15:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 01:20:58.070 05:15:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_off 01:20:58.070 05:15:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_len 01:20:58.070 05:15:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 01:20:58.070 4096+0 records in 01:20:58.070 4096+0 records out 01:20:58.070 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0302408 s, 69.3 MB/s 01:20:58.070 05:15:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 01:20:58.328 4096+0 records in 01:20:58.328 4096+0 records out 01:20:58.328 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.304833 s, 6.9 MB/s 01:20:58.328 05:15:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 01:20:58.328 05:15:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@34 -- # cmp -b -n 
2097152 /raidtest/raidrandtest /dev/nbd0 01:20:58.328 05:15:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 01:20:58.328 05:15:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 01:20:58.328 05:15:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=0 01:20:58.328 05:15:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 01:20:58.328 05:15:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 01:20:58.328 128+0 records in 01:20:58.328 128+0 records out 01:20:58.328 65536 bytes (66 kB, 64 KiB) copied, 0.00103704 s, 63.2 MB/s 01:20:58.328 05:15:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 01:20:58.328 05:15:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 01:20:58.328 05:15:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 01:20:58.328 05:15:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 01:20:58.328 05:15:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 01:20:58.328 05:15:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 01:20:58.328 05:15:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 01:20:58.328 05:15:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 01:20:58.328 2035+0 records in 01:20:58.328 2035+0 records out 01:20:58.328 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.010064 s, 104 MB/s 01:20:58.328 05:15:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 01:20:58.328 05:15:49 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 01:20:58.328 05:15:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 01:20:58.328 05:15:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 01:20:58.328 05:15:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 01:20:58.328 05:15:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 01:20:58.328 05:15:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 01:20:58.328 05:15:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 01:20:58.328 456+0 records in 01:20:58.328 456+0 records out 01:20:58.328 233472 bytes (233 kB, 228 KiB) copied, 0.00323846 s, 72.1 MB/s 01:20:58.328 05:15:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 01:20:58.328 05:15:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 01:20:58.328 05:15:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 01:20:58.328 05:15:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 01:20:58.328 05:15:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 01:20:58.329 05:15:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@52 -- # return 0 01:20:58.329 05:15:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 01:20:58.329 05:15:49 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 01:20:58.329 05:15:49 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 01:20:58.329 
05:15:49 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list 01:20:58.329 05:15:49 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i 01:20:58.329 05:15:49 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 01:20:58.329 05:15:49 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 01:20:58.588 05:15:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 01:20:58.588 05:15:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 01:20:58.588 05:15:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 01:20:58.588 05:15:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:20:58.588 05:15:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:20:58.588 05:15:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 01:20:58.588 [2024-12-09 05:15:50.137666] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:20:58.588 05:15:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break 01:20:58.588 05:15:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0 01:20:58.588 05:15:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 01:20:58.588 05:15:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 01:20:58.588 05:15:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 01:20:58.847 05:15:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 01:20:58.847 05:15:50 
bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[]' 01:20:58.847 05:15:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 01:20:59.154 05:15:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 01:20:59.154 05:15:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo '' 01:20:59.154 05:15:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 01:20:59.154 05:15:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true 01:20:59.154 05:15:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0 01:20:59.154 05:15:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0 01:20:59.154 05:15:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # count=0 01:20:59.154 05:15:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 01:20:59.154 05:15:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # killprocess 60237 01:20:59.154 05:15:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- # '[' -z 60237 ']' 01:20:59.154 05:15:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@958 -- # kill -0 60237 01:20:59.154 05:15:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # uname 01:20:59.154 05:15:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:20:59.154 05:15:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60237 01:20:59.154 killing process with pid 60237 01:20:59.154 05:15:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:20:59.154 05:15:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:20:59.154 05:15:50 bdev_raid.raid_function_test_concat -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 60237' 01:20:59.154 05:15:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@973 -- # kill 60237 01:20:59.155 [2024-12-09 05:15:50.531213] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 01:20:59.155 [2024-12-09 05:15:50.531402] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 01:20:59.155 05:15:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@978 -- # wait 60237 01:20:59.155 [2024-12-09 05:15:50.531501] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 01:20:59.155 [2024-12-09 05:15:50.531522] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 01:20:59.155 [2024-12-09 05:15:50.707672] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 01:21:00.556 ************************************ 01:21:00.556 END TEST raid_function_test_concat 01:21:00.556 ************************************ 01:21:00.556 05:15:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@99 -- # return 0 01:21:00.556 01:21:00.556 real 0m4.326s 01:21:00.556 user 0m5.240s 01:21:00.556 sys 0m1.059s 01:21:00.556 05:15:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1130 -- # xtrace_disable 01:21:00.556 05:15:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 01:21:00.556 05:15:51 bdev_raid -- bdev/bdev_raid.sh@963 -- # run_test raid0_resize_test raid_resize_test 0 01:21:00.556 05:15:51 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:21:00.556 05:15:51 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 01:21:00.556 05:15:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 01:21:00.556 ************************************ 01:21:00.556 START TEST raid0_resize_test 01:21:00.556 ************************************ 01:21:00.556 05:15:51 
bdev_raid.raid0_resize_test -- common/autotest_common.sh@1129 -- # raid_resize_test 0 01:21:00.556 05:15:51 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=0 01:21:00.556 05:15:51 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 01:21:00.556 05:15:51 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 01:21:00.556 05:15:51 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 01:21:00.556 05:15:51 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 01:21:00.556 05:15:51 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 01:21:00.556 05:15:51 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 01:21:00.556 05:15:51 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 01:21:00.556 05:15:51 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60366 01:21:00.556 05:15:51 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 01:21:00.556 Process raid pid: 60366 01:21:00.556 05:15:51 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60366' 01:21:00.556 05:15:51 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60366 01:21:00.556 05:15:51 bdev_raid.raid0_resize_test -- common/autotest_common.sh@835 -- # '[' -z 60366 ']' 01:21:00.556 05:15:51 bdev_raid.raid0_resize_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:21:00.557 05:15:51 bdev_raid.raid0_resize_test -- common/autotest_common.sh@840 -- # local max_retries=100 01:21:00.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
01:21:00.557 05:15:51 bdev_raid.raid0_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:21:00.557 05:15:51 bdev_raid.raid0_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable 01:21:00.557 05:15:51 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 01:21:00.557 [2024-12-09 05:15:51.977814] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:21:00.557 [2024-12-09 05:15:51.978042] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:21:00.557 [2024-12-09 05:15:52.166143] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:21:00.815 [2024-12-09 05:15:52.282047] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:21:01.073 [2024-12-09 05:15:52.463478] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 01:21:01.073 [2024-12-09 05:15:52.463561] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 01:21:01.331 05:15:52 bdev_raid.raid0_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:21:01.331 05:15:52 bdev_raid.raid0_resize_test -- common/autotest_common.sh@868 -- # return 0 01:21:01.331 05:15:52 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 01:21:01.331 05:15:52 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:01.331 05:15:52 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 01:21:01.331 Base_1 01:21:01.331 05:15:52 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:01.331 05:15:52 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 01:21:01.331 
05:15:52 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:01.331 05:15:52 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 01:21:01.331 Base_2 01:21:01.331 05:15:52 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:01.331 05:15:52 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 0 -eq 0 ']' 01:21:01.331 05:15:52 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # rpc_cmd bdev_raid_create -z 64 -r 0 -b ''\''Base_1 Base_2'\''' -n Raid 01:21:01.331 05:15:52 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:01.331 05:15:52 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 01:21:01.332 [2024-12-09 05:15:52.919581] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 01:21:01.332 [2024-12-09 05:15:52.921771] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 01:21:01.332 [2024-12-09 05:15:52.921903] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 01:21:01.332 [2024-12-09 05:15:52.921921] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 01:21:01.332 [2024-12-09 05:15:52.922220] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 01:21:01.332 [2024-12-09 05:15:52.922391] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 01:21:01.332 [2024-12-09 05:15:52.922413] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 01:21:01.332 [2024-12-09 05:15:52.922565] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:21:01.332 05:15:52 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:01.332 05:15:52 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 01:21:01.332 
05:15:52 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:01.332 05:15:52 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 01:21:01.332 [2024-12-09 05:15:52.927524] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 01:21:01.332 [2024-12-09 05:15:52.927560] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 01:21:01.332 true 01:21:01.332 05:15:52 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:01.332 05:15:52 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 01:21:01.332 05:15:52 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:01.332 05:15:52 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 01:21:01.332 05:15:52 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 01:21:01.332 [2024-12-09 05:15:52.939757] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 01:21:01.590 05:15:52 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:01.590 05:15:52 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=131072 01:21:01.590 05:15:52 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=64 01:21:01.590 05:15:52 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 0 -eq 0 ']' 01:21:01.590 05:15:52 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # expected_size=64 01:21:01.590 05:15:52 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 64 '!=' 64 ']' 01:21:01.590 05:15:52 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 01:21:01.590 05:15:52 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:01.590 05:15:52 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 
-- # set +x 01:21:01.590 [2024-12-09 05:15:52.991577] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 01:21:01.590 [2024-12-09 05:15:52.991610] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 01:21:01.590 [2024-12-09 05:15:52.991645] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 01:21:01.590 true 01:21:01.590 05:15:52 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:01.590 05:15:52 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 01:21:01.590 05:15:52 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:01.590 05:15:52 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 01:21:01.590 05:15:52 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 01:21:01.590 [2024-12-09 05:15:53.003822] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 01:21:01.590 05:15:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:01.590 05:15:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=262144 01:21:01.590 05:15:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=128 01:21:01.590 05:15:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 0 -eq 0 ']' 01:21:01.590 05:15:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@378 -- # expected_size=128 01:21:01.590 05:15:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 128 '!=' 128 ']' 01:21:01.590 05:15:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60366 01:21:01.590 05:15:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@954 -- # '[' -z 60366 ']' 01:21:01.590 05:15:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@958 -- # kill -0 60366 
01:21:01.590 05:15:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- # uname 01:21:01.590 05:15:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:21:01.590 05:15:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60366 01:21:01.590 05:15:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:21:01.590 05:15:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:21:01.590 killing process with pid 60366 01:21:01.590 05:15:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60366' 01:21:01.590 05:15:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@973 -- # kill 60366 01:21:01.590 [2024-12-09 05:15:53.088983] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 01:21:01.590 05:15:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@978 -- # wait 60366 01:21:01.590 [2024-12-09 05:15:53.089063] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 01:21:01.590 [2024-12-09 05:15:53.089156] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 01:21:01.590 [2024-12-09 05:15:53.089198] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 01:21:01.590 [2024-12-09 05:15:53.104545] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 01:21:02.524 05:15:54 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 01:21:02.524 01:21:02.524 real 0m2.236s 01:21:02.524 user 0m2.423s 01:21:02.524 sys 0m0.403s 01:21:02.524 05:15:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable 01:21:02.524 05:15:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 01:21:02.524 ************************************ 01:21:02.524 END TEST 
raid0_resize_test 01:21:02.524 ************************************ 01:21:02.783 05:15:54 bdev_raid -- bdev/bdev_raid.sh@964 -- # run_test raid1_resize_test raid_resize_test 1 01:21:02.783 05:15:54 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:21:02.783 05:15:54 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 01:21:02.783 05:15:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 01:21:02.783 ************************************ 01:21:02.783 START TEST raid1_resize_test 01:21:02.783 ************************************ 01:21:02.783 05:15:54 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1129 -- # raid_resize_test 1 01:21:02.783 05:15:54 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=1 01:21:02.783 05:15:54 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 01:21:02.783 05:15:54 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 01:21:02.783 05:15:54 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 01:21:02.783 05:15:54 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 01:21:02.783 05:15:54 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 01:21:02.783 05:15:54 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 01:21:02.783 05:15:54 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 01:21:02.783 05:15:54 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60428 01:21:02.783 05:15:54 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 01:21:02.783 05:15:54 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60428' 01:21:02.783 Process raid pid: 60428 01:21:02.783 05:15:54 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60428 01:21:02.783 05:15:54 
bdev_raid.raid1_resize_test -- common/autotest_common.sh@835 -- # '[' -z 60428 ']' 01:21:02.783 05:15:54 bdev_raid.raid1_resize_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:21:02.783 05:15:54 bdev_raid.raid1_resize_test -- common/autotest_common.sh@840 -- # local max_retries=100 01:21:02.783 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:21:02.783 05:15:54 bdev_raid.raid1_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:21:02.783 05:15:54 bdev_raid.raid1_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable 01:21:02.783 05:15:54 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 01:21:02.783 [2024-12-09 05:15:54.273238] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:21:02.783 [2024-12-09 05:15:54.273459] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:21:03.041 [2024-12-09 05:15:54.458933] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:21:03.041 [2024-12-09 05:15:54.589963] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:21:03.298 [2024-12-09 05:15:54.788646] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 01:21:03.298 [2024-12-09 05:15:54.788706] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 01:21:03.866 05:15:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:21:03.866 05:15:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@868 -- # return 0 01:21:03.866 05:15:55 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 01:21:03.866 05:15:55 
bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:03.866 05:15:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 01:21:03.866 Base_1 01:21:03.866 05:15:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:03.866 05:15:55 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 01:21:03.866 05:15:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:03.866 05:15:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 01:21:03.866 Base_2 01:21:03.866 05:15:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:03.866 05:15:55 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 1 -eq 0 ']' 01:21:03.866 05:15:55 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@352 -- # rpc_cmd bdev_raid_create -r 1 -b ''\''Base_1 Base_2'\''' -n Raid 01:21:03.866 05:15:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:03.866 05:15:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 01:21:03.866 [2024-12-09 05:15:55.213472] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 01:21:03.866 [2024-12-09 05:15:55.215939] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 01:21:03.866 [2024-12-09 05:15:55.216031] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 01:21:03.866 [2024-12-09 05:15:55.216049] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 01:21:03.866 [2024-12-09 05:15:55.216351] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 01:21:03.866 [2024-12-09 05:15:55.216524] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 01:21:03.866 [2024-12-09 05:15:55.216539] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 01:21:03.866 [2024-12-09 05:15:55.216723] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:21:03.866 05:15:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:03.866 05:15:55 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 01:21:03.866 05:15:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:03.866 05:15:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 01:21:03.866 [2024-12-09 05:15:55.221462] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 01:21:03.866 [2024-12-09 05:15:55.221499] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 01:21:03.866 true 01:21:03.866 05:15:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:03.866 05:15:55 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 01:21:03.866 05:15:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:03.866 05:15:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 01:21:03.866 05:15:55 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 01:21:03.866 [2024-12-09 05:15:55.233707] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 01:21:03.866 05:15:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:03.866 05:15:55 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=65536 01:21:03.866 05:15:55 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=32 01:21:03.866 05:15:55 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 1 -eq 0 ']' 01:21:03.866 05:15:55 
bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@364 -- # expected_size=32 01:21:03.866 05:15:55 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 32 '!=' 32 ']' 01:21:03.866 05:15:55 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 01:21:03.866 05:15:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:03.866 05:15:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 01:21:03.866 [2024-12-09 05:15:55.285465] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 01:21:03.866 [2024-12-09 05:15:55.285495] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 01:21:03.866 [2024-12-09 05:15:55.285569] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 65536 to 131072 01:21:03.866 true 01:21:03.866 05:15:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:03.866 05:15:55 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 01:21:03.866 05:15:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:03.866 05:15:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 01:21:03.866 05:15:55 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 01:21:03.866 [2024-12-09 05:15:55.297724] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 01:21:03.866 05:15:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:03.866 05:15:55 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=131072 01:21:03.866 05:15:55 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=64 01:21:03.866 05:15:55 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 1 -eq 0 ']' 01:21:03.866 05:15:55 
bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@380 -- # expected_size=64 01:21:03.866 05:15:55 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 64 '!=' 64 ']' 01:21:03.866 05:15:55 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60428 01:21:03.866 05:15:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@954 -- # '[' -z 60428 ']' 01:21:03.866 05:15:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@958 -- # kill -0 60428 01:21:03.866 05:15:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # uname 01:21:03.866 05:15:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:21:03.866 05:15:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60428 01:21:03.866 05:15:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:21:03.866 killing process with pid 60428 01:21:03.866 05:15:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:21:03.866 05:15:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60428' 01:21:03.866 05:15:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@973 -- # kill 60428 01:21:03.866 05:15:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@978 -- # wait 60428 01:21:03.866 [2024-12-09 05:15:55.378987] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 01:21:03.866 [2024-12-09 05:15:55.379097] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 01:21:03.866 [2024-12-09 05:15:55.379771] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 01:21:03.866 [2024-12-09 05:15:55.379817] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 01:21:03.866 [2024-12-09 05:15:55.393724] bdev_raid.c:1413:raid_bdev_exit: 
*DEBUG*: raid_bdev_exit 01:21:04.810 05:15:56 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 01:21:04.810 01:21:04.810 real 0m2.253s 01:21:04.810 user 0m2.425s 01:21:04.810 sys 0m0.414s 01:21:04.810 05:15:56 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable 01:21:04.810 05:15:56 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 01:21:04.810 ************************************ 01:21:04.810 END TEST raid1_resize_test 01:21:04.810 ************************************ 01:21:05.067 05:15:56 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 01:21:05.067 05:15:56 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 01:21:05.067 05:15:56 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 01:21:05.067 05:15:56 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 01:21:05.067 05:15:56 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 01:21:05.067 05:15:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 01:21:05.067 ************************************ 01:21:05.067 START TEST raid_state_function_test 01:21:05.067 ************************************ 01:21:05.067 05:15:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 false 01:21:05.067 05:15:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 01:21:05.067 05:15:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 01:21:05.067 05:15:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 01:21:05.067 05:15:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 01:21:05.067 05:15:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 01:21:05.067 05:15:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 
-- # (( i <= num_base_bdevs )) 01:21:05.067 05:15:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 01:21:05.067 05:15:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 01:21:05.067 05:15:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 01:21:05.067 05:15:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 01:21:05.067 05:15:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 01:21:05.067 05:15:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 01:21:05.067 05:15:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 01:21:05.067 05:15:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 01:21:05.067 05:15:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 01:21:05.067 05:15:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 01:21:05.067 05:15:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 01:21:05.067 05:15:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 01:21:05.067 05:15:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 01:21:05.067 05:15:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 01:21:05.067 05:15:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 01:21:05.067 05:15:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 01:21:05.067 05:15:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 01:21:05.067 05:15:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=60485 
01:21:05.067 05:15:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 01:21:05.067 Process raid pid: 60485 01:21:05.067 05:15:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 60485' 01:21:05.067 05:15:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 60485 01:21:05.067 05:15:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 60485 ']' 01:21:05.067 05:15:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:21:05.067 05:15:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 01:21:05.067 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:21:05.067 05:15:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:21:05.067 05:15:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 01:21:05.067 05:15:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:21:05.067 [2024-12-09 05:15:56.553763] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
01:21:05.067 [2024-12-09 05:15:56.553954] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:21:05.325 [2024-12-09 05:15:56.725390] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:21:05.325 [2024-12-09 05:15:56.837410] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:21:05.582 [2024-12-09 05:15:57.037072] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 01:21:05.583 [2024-12-09 05:15:57.037147] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 01:21:06.148 05:15:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:21:06.148 05:15:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 01:21:06.148 05:15:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 01:21:06.148 05:15:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:06.148 05:15:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:21:06.148 [2024-12-09 05:15:57.506132] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 01:21:06.148 [2024-12-09 05:15:57.506221] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 01:21:06.148 [2024-12-09 05:15:57.506240] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 01:21:06.148 [2024-12-09 05:15:57.506257] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 01:21:06.148 05:15:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:06.148 05:15:57 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 01:21:06.148 05:15:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:21:06.148 05:15:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:21:06.148 05:15:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 01:21:06.148 05:15:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:21:06.148 05:15:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 01:21:06.148 05:15:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:21:06.148 05:15:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:21:06.148 05:15:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:21:06.148 05:15:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:21:06.148 05:15:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:21:06.148 05:15:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:21:06.148 05:15:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:06.148 05:15:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:21:06.148 05:15:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:06.148 05:15:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:21:06.148 "name": "Existed_Raid", 01:21:06.148 "uuid": "00000000-0000-0000-0000-000000000000", 01:21:06.148 "strip_size_kb": 64, 01:21:06.148 "state": "configuring", 01:21:06.148 
"raid_level": "raid0", 01:21:06.148 "superblock": false, 01:21:06.148 "num_base_bdevs": 2, 01:21:06.148 "num_base_bdevs_discovered": 0, 01:21:06.148 "num_base_bdevs_operational": 2, 01:21:06.148 "base_bdevs_list": [ 01:21:06.148 { 01:21:06.148 "name": "BaseBdev1", 01:21:06.148 "uuid": "00000000-0000-0000-0000-000000000000", 01:21:06.148 "is_configured": false, 01:21:06.148 "data_offset": 0, 01:21:06.148 "data_size": 0 01:21:06.148 }, 01:21:06.148 { 01:21:06.148 "name": "BaseBdev2", 01:21:06.148 "uuid": "00000000-0000-0000-0000-000000000000", 01:21:06.148 "is_configured": false, 01:21:06.148 "data_offset": 0, 01:21:06.148 "data_size": 0 01:21:06.148 } 01:21:06.148 ] 01:21:06.148 }' 01:21:06.148 05:15:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:21:06.148 05:15:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:21:06.406 05:15:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 01:21:06.406 05:15:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:06.406 05:15:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:21:06.406 [2024-12-09 05:15:58.014290] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 01:21:06.406 [2024-12-09 05:15:58.014415] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 01:21:06.406 05:15:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:06.406 05:15:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 01:21:06.406 05:15:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:06.406 05:15:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
01:21:06.667 [2024-12-09 05:15:58.022218] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 01:21:06.667 [2024-12-09 05:15:58.022283] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 01:21:06.667 [2024-12-09 05:15:58.022298] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 01:21:06.667 [2024-12-09 05:15:58.022316] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 01:21:06.667 05:15:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:06.667 05:15:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 01:21:06.667 05:15:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:06.667 05:15:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:21:06.667 [2024-12-09 05:15:58.064578] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 01:21:06.667 BaseBdev1 01:21:06.667 05:15:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:06.667 05:15:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 01:21:06.667 05:15:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 01:21:06.667 05:15:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 01:21:06.667 05:15:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 01:21:06.667 05:15:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 01:21:06.667 05:15:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 01:21:06.667 05:15:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 01:21:06.667 05:15:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:06.667 05:15:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:21:06.667 05:15:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:06.667 05:15:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 01:21:06.667 05:15:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:06.667 05:15:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:21:06.667 [ 01:21:06.667 { 01:21:06.667 "name": "BaseBdev1", 01:21:06.667 "aliases": [ 01:21:06.667 "11c12f95-6720-4015-bb22-511b6d81201f" 01:21:06.667 ], 01:21:06.667 "product_name": "Malloc disk", 01:21:06.667 "block_size": 512, 01:21:06.667 "num_blocks": 65536, 01:21:06.667 "uuid": "11c12f95-6720-4015-bb22-511b6d81201f", 01:21:06.667 "assigned_rate_limits": { 01:21:06.667 "rw_ios_per_sec": 0, 01:21:06.667 "rw_mbytes_per_sec": 0, 01:21:06.667 "r_mbytes_per_sec": 0, 01:21:06.667 "w_mbytes_per_sec": 0 01:21:06.667 }, 01:21:06.667 "claimed": true, 01:21:06.667 "claim_type": "exclusive_write", 01:21:06.667 "zoned": false, 01:21:06.667 "supported_io_types": { 01:21:06.667 "read": true, 01:21:06.667 "write": true, 01:21:06.667 "unmap": true, 01:21:06.667 "flush": true, 01:21:06.667 "reset": true, 01:21:06.667 "nvme_admin": false, 01:21:06.667 "nvme_io": false, 01:21:06.667 "nvme_io_md": false, 01:21:06.667 "write_zeroes": true, 01:21:06.667 "zcopy": true, 01:21:06.667 "get_zone_info": false, 01:21:06.667 "zone_management": false, 01:21:06.667 "zone_append": false, 01:21:06.667 "compare": false, 01:21:06.667 "compare_and_write": false, 01:21:06.667 "abort": true, 01:21:06.667 "seek_hole": false, 01:21:06.667 "seek_data": false, 01:21:06.667 "copy": true, 01:21:06.667 "nvme_iov_md": 
false 01:21:06.667 }, 01:21:06.667 "memory_domains": [ 01:21:06.667 { 01:21:06.667 "dma_device_id": "system", 01:21:06.667 "dma_device_type": 1 01:21:06.667 }, 01:21:06.667 { 01:21:06.667 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:21:06.667 "dma_device_type": 2 01:21:06.667 } 01:21:06.667 ], 01:21:06.667 "driver_specific": {} 01:21:06.667 } 01:21:06.667 ] 01:21:06.667 05:15:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:06.667 05:15:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 01:21:06.667 05:15:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 01:21:06.667 05:15:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:21:06.667 05:15:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:21:06.667 05:15:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 01:21:06.667 05:15:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:21:06.667 05:15:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 01:21:06.667 05:15:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:21:06.667 05:15:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:21:06.667 05:15:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:21:06.667 05:15:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:21:06.667 05:15:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:21:06.667 05:15:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:06.667 05:15:58 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:21:06.667 05:15:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:21:06.667 05:15:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:06.667 05:15:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:21:06.667 "name": "Existed_Raid", 01:21:06.667 "uuid": "00000000-0000-0000-0000-000000000000", 01:21:06.667 "strip_size_kb": 64, 01:21:06.667 "state": "configuring", 01:21:06.667 "raid_level": "raid0", 01:21:06.667 "superblock": false, 01:21:06.667 "num_base_bdevs": 2, 01:21:06.667 "num_base_bdevs_discovered": 1, 01:21:06.667 "num_base_bdevs_operational": 2, 01:21:06.667 "base_bdevs_list": [ 01:21:06.667 { 01:21:06.667 "name": "BaseBdev1", 01:21:06.667 "uuid": "11c12f95-6720-4015-bb22-511b6d81201f", 01:21:06.667 "is_configured": true, 01:21:06.667 "data_offset": 0, 01:21:06.668 "data_size": 65536 01:21:06.668 }, 01:21:06.668 { 01:21:06.668 "name": "BaseBdev2", 01:21:06.668 "uuid": "00000000-0000-0000-0000-000000000000", 01:21:06.668 "is_configured": false, 01:21:06.668 "data_offset": 0, 01:21:06.668 "data_size": 0 01:21:06.668 } 01:21:06.668 ] 01:21:06.668 }' 01:21:06.668 05:15:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:21:06.668 05:15:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:21:07.244 05:15:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 01:21:07.244 05:15:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:07.244 05:15:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:21:07.244 [2024-12-09 05:15:58.696874] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 01:21:07.244 [2024-12-09 05:15:58.696954] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 01:21:07.244 05:15:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:07.244 05:15:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 01:21:07.244 05:15:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:07.244 05:15:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:21:07.244 [2024-12-09 05:15:58.704885] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 01:21:07.244 [2024-12-09 05:15:58.707512] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 01:21:07.244 [2024-12-09 05:15:58.707583] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 01:21:07.244 05:15:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:07.244 05:15:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 01:21:07.244 05:15:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 01:21:07.244 05:15:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 01:21:07.244 05:15:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:21:07.244 05:15:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:21:07.244 05:15:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 01:21:07.244 05:15:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:21:07.244 05:15:58 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 01:21:07.244 05:15:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:21:07.244 05:15:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:21:07.244 05:15:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:21:07.244 05:15:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:21:07.244 05:15:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:21:07.245 05:15:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:07.245 05:15:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:21:07.245 05:15:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:21:07.245 05:15:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:07.245 05:15:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:21:07.245 "name": "Existed_Raid", 01:21:07.245 "uuid": "00000000-0000-0000-0000-000000000000", 01:21:07.245 "strip_size_kb": 64, 01:21:07.245 "state": "configuring", 01:21:07.245 "raid_level": "raid0", 01:21:07.245 "superblock": false, 01:21:07.245 "num_base_bdevs": 2, 01:21:07.245 "num_base_bdevs_discovered": 1, 01:21:07.245 "num_base_bdevs_operational": 2, 01:21:07.245 "base_bdevs_list": [ 01:21:07.245 { 01:21:07.245 "name": "BaseBdev1", 01:21:07.245 "uuid": "11c12f95-6720-4015-bb22-511b6d81201f", 01:21:07.245 "is_configured": true, 01:21:07.245 "data_offset": 0, 01:21:07.245 "data_size": 65536 01:21:07.245 }, 01:21:07.245 { 01:21:07.245 "name": "BaseBdev2", 01:21:07.245 "uuid": "00000000-0000-0000-0000-000000000000", 01:21:07.245 "is_configured": false, 01:21:07.245 "data_offset": 0, 01:21:07.245 "data_size": 0 
01:21:07.245 } 01:21:07.245 ] 01:21:07.245 }' 01:21:07.245 05:15:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:21:07.245 05:15:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:21:07.825 05:15:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 01:21:07.825 05:15:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:07.825 05:15:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:21:07.825 [2024-12-09 05:15:59.269047] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 01:21:07.825 [2024-12-09 05:15:59.269133] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 01:21:07.825 [2024-12-09 05:15:59.269148] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 01:21:07.825 [2024-12-09 05:15:59.269559] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 01:21:07.825 [2024-12-09 05:15:59.269830] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 01:21:07.825 [2024-12-09 05:15:59.269863] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 01:21:07.825 [2024-12-09 05:15:59.270195] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:21:07.825 BaseBdev2 01:21:07.825 05:15:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:07.825 05:15:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 01:21:07.825 05:15:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 01:21:07.825 05:15:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 01:21:07.825 05:15:59 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 01:21:07.825 05:15:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 01:21:07.825 05:15:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 01:21:07.825 05:15:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 01:21:07.825 05:15:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:07.825 05:15:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:21:07.825 05:15:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:07.825 05:15:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 01:21:07.825 05:15:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:07.825 05:15:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:21:07.825 [ 01:21:07.825 { 01:21:07.825 "name": "BaseBdev2", 01:21:07.825 "aliases": [ 01:21:07.825 "44a311eb-3cd1-4f94-8141-7f54c29dbd8b" 01:21:07.825 ], 01:21:07.825 "product_name": "Malloc disk", 01:21:07.825 "block_size": 512, 01:21:07.825 "num_blocks": 65536, 01:21:07.825 "uuid": "44a311eb-3cd1-4f94-8141-7f54c29dbd8b", 01:21:07.825 "assigned_rate_limits": { 01:21:07.825 "rw_ios_per_sec": 0, 01:21:07.825 "rw_mbytes_per_sec": 0, 01:21:07.825 "r_mbytes_per_sec": 0, 01:21:07.825 "w_mbytes_per_sec": 0 01:21:07.825 }, 01:21:07.825 "claimed": true, 01:21:07.825 "claim_type": "exclusive_write", 01:21:07.826 "zoned": false, 01:21:07.826 "supported_io_types": { 01:21:07.826 "read": true, 01:21:07.826 "write": true, 01:21:07.826 "unmap": true, 01:21:07.826 "flush": true, 01:21:07.826 "reset": true, 01:21:07.826 "nvme_admin": false, 01:21:07.826 "nvme_io": false, 01:21:07.826 "nvme_io_md": 
false, 01:21:07.826 "write_zeroes": true, 01:21:07.826 "zcopy": true, 01:21:07.826 "get_zone_info": false, 01:21:07.826 "zone_management": false, 01:21:07.826 "zone_append": false, 01:21:07.826 "compare": false, 01:21:07.826 "compare_and_write": false, 01:21:07.826 "abort": true, 01:21:07.826 "seek_hole": false, 01:21:07.826 "seek_data": false, 01:21:07.826 "copy": true, 01:21:07.826 "nvme_iov_md": false 01:21:07.826 }, 01:21:07.826 "memory_domains": [ 01:21:07.826 { 01:21:07.826 "dma_device_id": "system", 01:21:07.826 "dma_device_type": 1 01:21:07.826 }, 01:21:07.826 { 01:21:07.826 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:21:07.826 "dma_device_type": 2 01:21:07.826 } 01:21:07.826 ], 01:21:07.826 "driver_specific": {} 01:21:07.826 } 01:21:07.826 ] 01:21:07.826 05:15:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:07.826 05:15:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 01:21:07.826 05:15:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 01:21:07.826 05:15:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 01:21:07.826 05:15:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 01:21:07.826 05:15:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:21:07.826 05:15:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:21:07.826 05:15:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 01:21:07.826 05:15:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:21:07.826 05:15:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 01:21:07.826 05:15:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 01:21:07.826 05:15:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:21:07.826 05:15:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:21:07.826 05:15:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:21:07.826 05:15:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:21:07.826 05:15:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:21:07.826 05:15:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:07.826 05:15:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:21:07.826 05:15:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:07.826 05:15:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:21:07.826 "name": "Existed_Raid", 01:21:07.826 "uuid": "922ef8dc-e913-43ec-bfde-a900f0d0d8fb", 01:21:07.826 "strip_size_kb": 64, 01:21:07.826 "state": "online", 01:21:07.826 "raid_level": "raid0", 01:21:07.826 "superblock": false, 01:21:07.826 "num_base_bdevs": 2, 01:21:07.826 "num_base_bdevs_discovered": 2, 01:21:07.826 "num_base_bdevs_operational": 2, 01:21:07.826 "base_bdevs_list": [ 01:21:07.826 { 01:21:07.826 "name": "BaseBdev1", 01:21:07.826 "uuid": "11c12f95-6720-4015-bb22-511b6d81201f", 01:21:07.826 "is_configured": true, 01:21:07.826 "data_offset": 0, 01:21:07.826 "data_size": 65536 01:21:07.826 }, 01:21:07.826 { 01:21:07.826 "name": "BaseBdev2", 01:21:07.826 "uuid": "44a311eb-3cd1-4f94-8141-7f54c29dbd8b", 01:21:07.826 "is_configured": true, 01:21:07.826 "data_offset": 0, 01:21:07.826 "data_size": 65536 01:21:07.826 } 01:21:07.826 ] 01:21:07.826 }' 01:21:07.826 05:15:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
01:21:07.826 05:15:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:21:08.392 05:15:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 01:21:08.392 05:15:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 01:21:08.392 05:15:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 01:21:08.392 05:15:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 01:21:08.392 05:15:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 01:21:08.392 05:15:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 01:21:08.392 05:15:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 01:21:08.392 05:15:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 01:21:08.392 05:15:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:08.392 05:15:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:21:08.392 [2024-12-09 05:15:59.833580] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 01:21:08.392 05:15:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:08.392 05:15:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 01:21:08.392 "name": "Existed_Raid", 01:21:08.392 "aliases": [ 01:21:08.392 "922ef8dc-e913-43ec-bfde-a900f0d0d8fb" 01:21:08.392 ], 01:21:08.392 "product_name": "Raid Volume", 01:21:08.392 "block_size": 512, 01:21:08.392 "num_blocks": 131072, 01:21:08.392 "uuid": "922ef8dc-e913-43ec-bfde-a900f0d0d8fb", 01:21:08.392 "assigned_rate_limits": { 01:21:08.392 "rw_ios_per_sec": 0, 01:21:08.392 "rw_mbytes_per_sec": 0, 01:21:08.392 "r_mbytes_per_sec": 
0, 01:21:08.393 "w_mbytes_per_sec": 0 01:21:08.393 }, 01:21:08.393 "claimed": false, 01:21:08.393 "zoned": false, 01:21:08.393 "supported_io_types": { 01:21:08.393 "read": true, 01:21:08.393 "write": true, 01:21:08.393 "unmap": true, 01:21:08.393 "flush": true, 01:21:08.393 "reset": true, 01:21:08.393 "nvme_admin": false, 01:21:08.393 "nvme_io": false, 01:21:08.393 "nvme_io_md": false, 01:21:08.393 "write_zeroes": true, 01:21:08.393 "zcopy": false, 01:21:08.393 "get_zone_info": false, 01:21:08.393 "zone_management": false, 01:21:08.393 "zone_append": false, 01:21:08.393 "compare": false, 01:21:08.393 "compare_and_write": false, 01:21:08.393 "abort": false, 01:21:08.393 "seek_hole": false, 01:21:08.393 "seek_data": false, 01:21:08.393 "copy": false, 01:21:08.393 "nvme_iov_md": false 01:21:08.393 }, 01:21:08.393 "memory_domains": [ 01:21:08.393 { 01:21:08.393 "dma_device_id": "system", 01:21:08.393 "dma_device_type": 1 01:21:08.393 }, 01:21:08.393 { 01:21:08.393 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:21:08.393 "dma_device_type": 2 01:21:08.393 }, 01:21:08.393 { 01:21:08.393 "dma_device_id": "system", 01:21:08.393 "dma_device_type": 1 01:21:08.393 }, 01:21:08.393 { 01:21:08.393 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:21:08.393 "dma_device_type": 2 01:21:08.393 } 01:21:08.393 ], 01:21:08.393 "driver_specific": { 01:21:08.393 "raid": { 01:21:08.393 "uuid": "922ef8dc-e913-43ec-bfde-a900f0d0d8fb", 01:21:08.393 "strip_size_kb": 64, 01:21:08.393 "state": "online", 01:21:08.393 "raid_level": "raid0", 01:21:08.393 "superblock": false, 01:21:08.393 "num_base_bdevs": 2, 01:21:08.393 "num_base_bdevs_discovered": 2, 01:21:08.393 "num_base_bdevs_operational": 2, 01:21:08.393 "base_bdevs_list": [ 01:21:08.393 { 01:21:08.393 "name": "BaseBdev1", 01:21:08.393 "uuid": "11c12f95-6720-4015-bb22-511b6d81201f", 01:21:08.393 "is_configured": true, 01:21:08.393 "data_offset": 0, 01:21:08.393 "data_size": 65536 01:21:08.393 }, 01:21:08.393 { 01:21:08.393 "name": "BaseBdev2", 
01:21:08.393 "uuid": "44a311eb-3cd1-4f94-8141-7f54c29dbd8b", 01:21:08.393 "is_configured": true, 01:21:08.393 "data_offset": 0, 01:21:08.393 "data_size": 65536 01:21:08.393 } 01:21:08.393 ] 01:21:08.393 } 01:21:08.393 } 01:21:08.393 }' 01:21:08.393 05:15:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 01:21:08.393 05:15:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 01:21:08.393 BaseBdev2' 01:21:08.393 05:15:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:21:08.393 05:15:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 01:21:08.393 05:15:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:21:08.393 05:15:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:21:08.393 05:15:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 01:21:08.393 05:15:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:08.393 05:15:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:21:08.393 05:15:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:08.651 05:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:21:08.651 05:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:21:08.651 05:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:21:08.651 05:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
01:21:08.651 05:16:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:08.651 05:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:21:08.651 05:16:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:21:08.651 05:16:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:08.651 05:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:21:08.651 05:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:21:08.651 05:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 01:21:08.651 05:16:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:08.651 05:16:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:21:08.651 [2024-12-09 05:16:00.101374] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 01:21:08.651 [2024-12-09 05:16:00.101431] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 01:21:08.651 [2024-12-09 05:16:00.101522] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 01:21:08.651 05:16:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:08.651 05:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 01:21:08.651 05:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 01:21:08.651 05:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 01:21:08.651 05:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 01:21:08.651 05:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 01:21:08.651 05:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 01:21:08.651 05:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:21:08.651 05:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 01:21:08.651 05:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 01:21:08.651 05:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:21:08.651 05:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 01:21:08.651 05:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:21:08.651 05:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:21:08.651 05:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:21:08.651 05:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:21:08.651 05:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:21:08.651 05:16:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:08.651 05:16:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:21:08.651 05:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:21:08.651 05:16:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:08.651 05:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:21:08.651 "name": "Existed_Raid", 01:21:08.651 "uuid": "922ef8dc-e913-43ec-bfde-a900f0d0d8fb", 01:21:08.651 "strip_size_kb": 64, 01:21:08.651 
"state": "offline", 01:21:08.651 "raid_level": "raid0", 01:21:08.651 "superblock": false, 01:21:08.651 "num_base_bdevs": 2, 01:21:08.651 "num_base_bdevs_discovered": 1, 01:21:08.651 "num_base_bdevs_operational": 1, 01:21:08.651 "base_bdevs_list": [ 01:21:08.651 { 01:21:08.651 "name": null, 01:21:08.651 "uuid": "00000000-0000-0000-0000-000000000000", 01:21:08.651 "is_configured": false, 01:21:08.651 "data_offset": 0, 01:21:08.651 "data_size": 65536 01:21:08.651 }, 01:21:08.651 { 01:21:08.651 "name": "BaseBdev2", 01:21:08.651 "uuid": "44a311eb-3cd1-4f94-8141-7f54c29dbd8b", 01:21:08.651 "is_configured": true, 01:21:08.651 "data_offset": 0, 01:21:08.651 "data_size": 65536 01:21:08.651 } 01:21:08.651 ] 01:21:08.651 }' 01:21:08.651 05:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:21:08.651 05:16:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:21:09.217 05:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 01:21:09.217 05:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 01:21:09.217 05:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 01:21:09.217 05:16:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:09.217 05:16:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:21:09.217 05:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 01:21:09.217 05:16:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:09.217 05:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 01:21:09.217 05:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 01:21:09.217 05:16:00 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 01:21:09.217 05:16:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:09.217 05:16:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:21:09.217 [2024-12-09 05:16:00.760210] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 01:21:09.217 [2024-12-09 05:16:00.760305] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 01:21:09.474 05:16:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:09.474 05:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 01:21:09.474 05:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 01:21:09.474 05:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 01:21:09.474 05:16:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:09.474 05:16:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:21:09.474 05:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 01:21:09.474 05:16:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:09.474 05:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 01:21:09.474 05:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 01:21:09.474 05:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 01:21:09.474 05:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 60485 01:21:09.474 05:16:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 60485 ']' 01:21:09.474 05:16:00 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@958 -- # kill -0 60485 01:21:09.474 05:16:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 01:21:09.474 05:16:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:21:09.474 05:16:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60485 01:21:09.474 05:16:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:21:09.474 05:16:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:21:09.474 killing process with pid 60485 01:21:09.474 05:16:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60485' 01:21:09.474 05:16:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 60485 01:21:09.474 [2024-12-09 05:16:00.940957] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 01:21:09.474 05:16:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 60485 01:21:09.474 [2024-12-09 05:16:00.955621] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 01:21:10.849 05:16:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 01:21:10.849 01:21:10.849 real 0m5.586s 01:21:10.849 user 0m8.483s 01:21:10.849 sys 0m0.750s 01:21:10.849 05:16:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 01:21:10.849 ************************************ 01:21:10.849 END TEST raid_state_function_test 01:21:10.849 ************************************ 01:21:10.849 05:16:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:21:10.849 05:16:02 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 01:21:10.849 05:16:02 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 
']' 01:21:10.849 05:16:02 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 01:21:10.849 05:16:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 01:21:10.849 ************************************ 01:21:10.849 START TEST raid_state_function_test_sb 01:21:10.849 ************************************ 01:21:10.849 05:16:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 true 01:21:10.849 05:16:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 01:21:10.849 05:16:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 01:21:10.849 05:16:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 01:21:10.849 05:16:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 01:21:10.849 05:16:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 01:21:10.849 05:16:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 01:21:10.849 05:16:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 01:21:10.849 05:16:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 01:21:10.849 05:16:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 01:21:10.849 05:16:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 01:21:10.849 05:16:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 01:21:10.849 05:16:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 01:21:10.849 05:16:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 01:21:10.849 05:16:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
01:21:10.849 05:16:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 01:21:10.849 05:16:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 01:21:10.849 05:16:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 01:21:10.849 05:16:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 01:21:10.849 05:16:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 01:21:10.849 05:16:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 01:21:10.849 05:16:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 01:21:10.849 05:16:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 01:21:10.849 05:16:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 01:21:10.849 05:16:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=60738 01:21:10.849 Process raid pid: 60738 01:21:10.849 05:16:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 60738' 01:21:10.849 05:16:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 60738 01:21:10.849 05:16:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 01:21:10.849 05:16:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 60738 ']' 01:21:10.849 05:16:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:21:10.849 05:16:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 01:21:10.849 Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock... 01:21:10.849 05:16:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:21:10.849 05:16:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 01:21:10.849 05:16:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:21:10.849 [2024-12-09 05:16:02.226003] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:21:10.849 [2024-12-09 05:16:02.226208] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:21:10.849 [2024-12-09 05:16:02.410442] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:21:11.138 [2024-12-09 05:16:02.536866] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:21:11.138 [2024-12-09 05:16:02.745149] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 01:21:11.138 [2024-12-09 05:16:02.745203] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 01:21:11.704 05:16:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:21:11.704 05:16:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 01:21:11.704 05:16:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 01:21:11.704 05:16:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:11.704 05:16:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:21:11.704 [2024-12-09 05:16:03.233954] bdev.c:8674:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev1 01:21:11.704 [2024-12-09 05:16:03.234067] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 01:21:11.704 [2024-12-09 05:16:03.234095] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 01:21:11.704 [2024-12-09 05:16:03.234122] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 01:21:11.704 05:16:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:11.704 05:16:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 01:21:11.704 05:16:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:21:11.704 05:16:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:21:11.704 05:16:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 01:21:11.704 05:16:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:21:11.704 05:16:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 01:21:11.704 05:16:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:21:11.704 05:16:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:21:11.704 05:16:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:21:11.704 05:16:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 01:21:11.704 05:16:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:21:11.704 05:16:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs 
all 01:21:11.704 05:16:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:11.704 05:16:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:21:11.704 05:16:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:11.704 05:16:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:21:11.704 "name": "Existed_Raid", 01:21:11.704 "uuid": "9d8a1729-0041-4d95-87c4-91eaa801114b", 01:21:11.704 "strip_size_kb": 64, 01:21:11.704 "state": "configuring", 01:21:11.704 "raid_level": "raid0", 01:21:11.704 "superblock": true, 01:21:11.704 "num_base_bdevs": 2, 01:21:11.704 "num_base_bdevs_discovered": 0, 01:21:11.704 "num_base_bdevs_operational": 2, 01:21:11.704 "base_bdevs_list": [ 01:21:11.704 { 01:21:11.704 "name": "BaseBdev1", 01:21:11.704 "uuid": "00000000-0000-0000-0000-000000000000", 01:21:11.704 "is_configured": false, 01:21:11.704 "data_offset": 0, 01:21:11.704 "data_size": 0 01:21:11.704 }, 01:21:11.704 { 01:21:11.704 "name": "BaseBdev2", 01:21:11.704 "uuid": "00000000-0000-0000-0000-000000000000", 01:21:11.704 "is_configured": false, 01:21:11.704 "data_offset": 0, 01:21:11.704 "data_size": 0 01:21:11.704 } 01:21:11.704 ] 01:21:11.704 }' 01:21:11.704 05:16:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:21:11.704 05:16:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:21:12.270 05:16:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 01:21:12.270 05:16:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:12.270 05:16:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:21:12.270 [2024-12-09 05:16:03.769930] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 01:21:12.270 
[2024-12-09 05:16:03.769981] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 01:21:12.270 05:16:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:12.270 05:16:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 01:21:12.270 05:16:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:12.270 05:16:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:21:12.270 [2024-12-09 05:16:03.777875] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 01:21:12.270 [2024-12-09 05:16:03.777946] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 01:21:12.270 [2024-12-09 05:16:03.777962] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 01:21:12.270 [2024-12-09 05:16:03.777981] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 01:21:12.270 05:16:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:12.270 05:16:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 01:21:12.270 05:16:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:12.270 05:16:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:21:12.271 [2024-12-09 05:16:03.825807] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 01:21:12.271 BaseBdev1 01:21:12.271 05:16:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:12.271 05:16:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # 
waitforbdev BaseBdev1 01:21:12.271 05:16:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 01:21:12.271 05:16:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 01:21:12.271 05:16:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 01:21:12.271 05:16:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 01:21:12.271 05:16:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 01:21:12.271 05:16:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 01:21:12.271 05:16:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:12.271 05:16:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:21:12.271 05:16:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:12.271 05:16:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 01:21:12.271 05:16:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:12.271 05:16:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:21:12.271 [ 01:21:12.271 { 01:21:12.271 "name": "BaseBdev1", 01:21:12.271 "aliases": [ 01:21:12.271 "02fa2931-6079-4cc0-869c-45d55e0e60cf" 01:21:12.271 ], 01:21:12.271 "product_name": "Malloc disk", 01:21:12.271 "block_size": 512, 01:21:12.271 "num_blocks": 65536, 01:21:12.271 "uuid": "02fa2931-6079-4cc0-869c-45d55e0e60cf", 01:21:12.271 "assigned_rate_limits": { 01:21:12.271 "rw_ios_per_sec": 0, 01:21:12.271 "rw_mbytes_per_sec": 0, 01:21:12.271 "r_mbytes_per_sec": 0, 01:21:12.271 "w_mbytes_per_sec": 0 01:21:12.271 }, 01:21:12.271 "claimed": true, 01:21:12.271 "claim_type": 
"exclusive_write", 01:21:12.271 "zoned": false, 01:21:12.271 "supported_io_types": { 01:21:12.271 "read": true, 01:21:12.271 "write": true, 01:21:12.271 "unmap": true, 01:21:12.271 "flush": true, 01:21:12.271 "reset": true, 01:21:12.271 "nvme_admin": false, 01:21:12.271 "nvme_io": false, 01:21:12.271 "nvme_io_md": false, 01:21:12.271 "write_zeroes": true, 01:21:12.271 "zcopy": true, 01:21:12.271 "get_zone_info": false, 01:21:12.271 "zone_management": false, 01:21:12.271 "zone_append": false, 01:21:12.271 "compare": false, 01:21:12.271 "compare_and_write": false, 01:21:12.271 "abort": true, 01:21:12.271 "seek_hole": false, 01:21:12.271 "seek_data": false, 01:21:12.271 "copy": true, 01:21:12.271 "nvme_iov_md": false 01:21:12.271 }, 01:21:12.271 "memory_domains": [ 01:21:12.271 { 01:21:12.271 "dma_device_id": "system", 01:21:12.271 "dma_device_type": 1 01:21:12.271 }, 01:21:12.271 { 01:21:12.271 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:21:12.271 "dma_device_type": 2 01:21:12.271 } 01:21:12.271 ], 01:21:12.271 "driver_specific": {} 01:21:12.271 } 01:21:12.271 ] 01:21:12.271 05:16:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:12.271 05:16:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 01:21:12.271 05:16:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 01:21:12.271 05:16:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:21:12.271 05:16:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:21:12.271 05:16:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 01:21:12.271 05:16:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:21:12.271 05:16:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=2 01:21:12.271 05:16:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:21:12.271 05:16:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:21:12.271 05:16:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:21:12.271 05:16:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 01:21:12.271 05:16:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:21:12.271 05:16:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:21:12.271 05:16:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:12.271 05:16:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:21:12.271 05:16:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:12.529 05:16:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:21:12.529 "name": "Existed_Raid", 01:21:12.529 "uuid": "f44603aa-9c29-4837-9106-a871f739840c", 01:21:12.529 "strip_size_kb": 64, 01:21:12.529 "state": "configuring", 01:21:12.529 "raid_level": "raid0", 01:21:12.529 "superblock": true, 01:21:12.529 "num_base_bdevs": 2, 01:21:12.529 "num_base_bdevs_discovered": 1, 01:21:12.529 "num_base_bdevs_operational": 2, 01:21:12.529 "base_bdevs_list": [ 01:21:12.529 { 01:21:12.529 "name": "BaseBdev1", 01:21:12.529 "uuid": "02fa2931-6079-4cc0-869c-45d55e0e60cf", 01:21:12.529 "is_configured": true, 01:21:12.529 "data_offset": 2048, 01:21:12.529 "data_size": 63488 01:21:12.529 }, 01:21:12.529 { 01:21:12.529 "name": "BaseBdev2", 01:21:12.529 "uuid": "00000000-0000-0000-0000-000000000000", 01:21:12.529 "is_configured": false, 01:21:12.529 "data_offset": 0, 01:21:12.529 
"data_size": 0 01:21:12.529 } 01:21:12.529 ] 01:21:12.529 }' 01:21:12.529 05:16:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:21:12.529 05:16:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:21:12.787 05:16:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 01:21:12.787 05:16:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:12.787 05:16:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:21:12.787 [2024-12-09 05:16:04.394047] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 01:21:12.787 [2024-12-09 05:16:04.394124] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 01:21:12.787 05:16:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:12.787 05:16:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 01:21:12.787 05:16:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:12.787 05:16:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:21:13.048 [2024-12-09 05:16:04.402211] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 01:21:13.048 [2024-12-09 05:16:04.404974] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 01:21:13.048 [2024-12-09 05:16:04.405212] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 01:21:13.048 05:16:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:13.048 05:16:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 
01:21:13.048 05:16:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 01:21:13.048 05:16:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 01:21:13.048 05:16:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:21:13.048 05:16:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:21:13.048 05:16:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 01:21:13.048 05:16:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:21:13.048 05:16:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 01:21:13.048 05:16:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:21:13.048 05:16:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:21:13.048 05:16:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:21:13.048 05:16:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 01:21:13.048 05:16:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:21:13.048 05:16:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:21:13.048 05:16:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:13.048 05:16:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:21:13.048 05:16:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:13.048 05:16:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
01:21:13.048 "name": "Existed_Raid", 01:21:13.048 "uuid": "f93fe74c-9aea-4fda-8f82-9d6fad30ce6b", 01:21:13.048 "strip_size_kb": 64, 01:21:13.048 "state": "configuring", 01:21:13.048 "raid_level": "raid0", 01:21:13.048 "superblock": true, 01:21:13.048 "num_base_bdevs": 2, 01:21:13.049 "num_base_bdevs_discovered": 1, 01:21:13.049 "num_base_bdevs_operational": 2, 01:21:13.049 "base_bdevs_list": [ 01:21:13.049 { 01:21:13.049 "name": "BaseBdev1", 01:21:13.049 "uuid": "02fa2931-6079-4cc0-869c-45d55e0e60cf", 01:21:13.049 "is_configured": true, 01:21:13.049 "data_offset": 2048, 01:21:13.049 "data_size": 63488 01:21:13.049 }, 01:21:13.049 { 01:21:13.049 "name": "BaseBdev2", 01:21:13.049 "uuid": "00000000-0000-0000-0000-000000000000", 01:21:13.049 "is_configured": false, 01:21:13.049 "data_offset": 0, 01:21:13.049 "data_size": 0 01:21:13.049 } 01:21:13.049 ] 01:21:13.049 }' 01:21:13.049 05:16:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:21:13.049 05:16:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:21:13.615 05:16:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 01:21:13.615 05:16:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:13.615 05:16:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:21:13.615 [2024-12-09 05:16:04.971309] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 01:21:13.615 [2024-12-09 05:16:04.972035] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 01:21:13.615 BaseBdev2 01:21:13.615 [2024-12-09 05:16:04.972210] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 01:21:13.615 [2024-12-09 05:16:04.972708] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 01:21:13.615 [2024-12-09 
05:16:04.972899] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 01:21:13.615 [2024-12-09 05:16:04.972922] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 01:21:13.615 [2024-12-09 05:16:04.973084] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:21:13.615 05:16:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:13.615 05:16:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 01:21:13.615 05:16:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 01:21:13.615 05:16:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 01:21:13.615 05:16:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 01:21:13.615 05:16:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 01:21:13.615 05:16:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 01:21:13.615 05:16:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 01:21:13.615 05:16:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:13.615 05:16:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:21:13.615 05:16:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:13.615 05:16:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 01:21:13.615 05:16:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:13.615 05:16:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
01:21:13.615 [ 01:21:13.615 { 01:21:13.615 "name": "BaseBdev2", 01:21:13.615 "aliases": [ 01:21:13.615 "8b2b8aa7-5dfb-4026-91d3-60e4fd975bb0" 01:21:13.615 ], 01:21:13.615 "product_name": "Malloc disk", 01:21:13.615 "block_size": 512, 01:21:13.615 "num_blocks": 65536, 01:21:13.615 "uuid": "8b2b8aa7-5dfb-4026-91d3-60e4fd975bb0", 01:21:13.615 "assigned_rate_limits": { 01:21:13.615 "rw_ios_per_sec": 0, 01:21:13.615 "rw_mbytes_per_sec": 0, 01:21:13.615 "r_mbytes_per_sec": 0, 01:21:13.615 "w_mbytes_per_sec": 0 01:21:13.615 }, 01:21:13.615 "claimed": true, 01:21:13.615 "claim_type": "exclusive_write", 01:21:13.615 "zoned": false, 01:21:13.615 "supported_io_types": { 01:21:13.615 "read": true, 01:21:13.615 "write": true, 01:21:13.615 "unmap": true, 01:21:13.615 "flush": true, 01:21:13.615 "reset": true, 01:21:13.615 "nvme_admin": false, 01:21:13.615 "nvme_io": false, 01:21:13.615 "nvme_io_md": false, 01:21:13.615 "write_zeroes": true, 01:21:13.615 "zcopy": true, 01:21:13.615 "get_zone_info": false, 01:21:13.615 "zone_management": false, 01:21:13.615 "zone_append": false, 01:21:13.615 "compare": false, 01:21:13.615 "compare_and_write": false, 01:21:13.615 "abort": true, 01:21:13.615 "seek_hole": false, 01:21:13.615 "seek_data": false, 01:21:13.615 "copy": true, 01:21:13.615 "nvme_iov_md": false 01:21:13.615 }, 01:21:13.615 "memory_domains": [ 01:21:13.615 { 01:21:13.615 "dma_device_id": "system", 01:21:13.615 "dma_device_type": 1 01:21:13.615 }, 01:21:13.615 { 01:21:13.615 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:21:13.615 "dma_device_type": 2 01:21:13.615 } 01:21:13.615 ], 01:21:13.615 "driver_specific": {} 01:21:13.615 } 01:21:13.615 ] 01:21:13.615 05:16:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:13.615 05:16:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 01:21:13.615 05:16:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 01:21:13.615 
05:16:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 01:21:13.615 05:16:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 01:21:13.615 05:16:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:21:13.615 05:16:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:21:13.615 05:16:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 01:21:13.615 05:16:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:21:13.615 05:16:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 01:21:13.615 05:16:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:21:13.615 05:16:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:21:13.615 05:16:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:21:13.615 05:16:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 01:21:13.615 05:16:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:21:13.615 05:16:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:21:13.615 05:16:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:13.615 05:16:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:21:13.616 05:16:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:13.616 05:16:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:21:13.616 "name": 
"Existed_Raid", 01:21:13.616 "uuid": "f93fe74c-9aea-4fda-8f82-9d6fad30ce6b", 01:21:13.616 "strip_size_kb": 64, 01:21:13.616 "state": "online", 01:21:13.616 "raid_level": "raid0", 01:21:13.616 "superblock": true, 01:21:13.616 "num_base_bdevs": 2, 01:21:13.616 "num_base_bdevs_discovered": 2, 01:21:13.616 "num_base_bdevs_operational": 2, 01:21:13.616 "base_bdevs_list": [ 01:21:13.616 { 01:21:13.616 "name": "BaseBdev1", 01:21:13.616 "uuid": "02fa2931-6079-4cc0-869c-45d55e0e60cf", 01:21:13.616 "is_configured": true, 01:21:13.616 "data_offset": 2048, 01:21:13.616 "data_size": 63488 01:21:13.616 }, 01:21:13.616 { 01:21:13.616 "name": "BaseBdev2", 01:21:13.616 "uuid": "8b2b8aa7-5dfb-4026-91d3-60e4fd975bb0", 01:21:13.616 "is_configured": true, 01:21:13.616 "data_offset": 2048, 01:21:13.616 "data_size": 63488 01:21:13.616 } 01:21:13.616 ] 01:21:13.616 }' 01:21:13.616 05:16:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:21:13.616 05:16:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:21:14.185 05:16:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 01:21:14.185 05:16:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 01:21:14.185 05:16:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 01:21:14.185 05:16:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 01:21:14.185 05:16:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 01:21:14.185 05:16:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 01:21:14.185 05:16:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 01:21:14.185 05:16:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 
01:21:14.185 05:16:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:14.185 05:16:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:21:14.185 [2024-12-09 05:16:05.583936] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 01:21:14.185 05:16:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:14.185 05:16:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 01:21:14.185 "name": "Existed_Raid", 01:21:14.185 "aliases": [ 01:21:14.185 "f93fe74c-9aea-4fda-8f82-9d6fad30ce6b" 01:21:14.185 ], 01:21:14.185 "product_name": "Raid Volume", 01:21:14.185 "block_size": 512, 01:21:14.185 "num_blocks": 126976, 01:21:14.185 "uuid": "f93fe74c-9aea-4fda-8f82-9d6fad30ce6b", 01:21:14.185 "assigned_rate_limits": { 01:21:14.185 "rw_ios_per_sec": 0, 01:21:14.185 "rw_mbytes_per_sec": 0, 01:21:14.185 "r_mbytes_per_sec": 0, 01:21:14.185 "w_mbytes_per_sec": 0 01:21:14.185 }, 01:21:14.185 "claimed": false, 01:21:14.185 "zoned": false, 01:21:14.185 "supported_io_types": { 01:21:14.185 "read": true, 01:21:14.185 "write": true, 01:21:14.185 "unmap": true, 01:21:14.185 "flush": true, 01:21:14.185 "reset": true, 01:21:14.185 "nvme_admin": false, 01:21:14.185 "nvme_io": false, 01:21:14.185 "nvme_io_md": false, 01:21:14.185 "write_zeroes": true, 01:21:14.185 "zcopy": false, 01:21:14.185 "get_zone_info": false, 01:21:14.185 "zone_management": false, 01:21:14.185 "zone_append": false, 01:21:14.185 "compare": false, 01:21:14.185 "compare_and_write": false, 01:21:14.185 "abort": false, 01:21:14.185 "seek_hole": false, 01:21:14.185 "seek_data": false, 01:21:14.185 "copy": false, 01:21:14.185 "nvme_iov_md": false 01:21:14.185 }, 01:21:14.185 "memory_domains": [ 01:21:14.185 { 01:21:14.185 "dma_device_id": "system", 01:21:14.185 "dma_device_type": 1 01:21:14.185 }, 01:21:14.185 { 01:21:14.185 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 01:21:14.185 "dma_device_type": 2 01:21:14.185 }, 01:21:14.185 { 01:21:14.185 "dma_device_id": "system", 01:21:14.185 "dma_device_type": 1 01:21:14.185 }, 01:21:14.185 { 01:21:14.185 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:21:14.185 "dma_device_type": 2 01:21:14.185 } 01:21:14.185 ], 01:21:14.185 "driver_specific": { 01:21:14.185 "raid": { 01:21:14.185 "uuid": "f93fe74c-9aea-4fda-8f82-9d6fad30ce6b", 01:21:14.185 "strip_size_kb": 64, 01:21:14.185 "state": "online", 01:21:14.185 "raid_level": "raid0", 01:21:14.185 "superblock": true, 01:21:14.185 "num_base_bdevs": 2, 01:21:14.185 "num_base_bdevs_discovered": 2, 01:21:14.185 "num_base_bdevs_operational": 2, 01:21:14.185 "base_bdevs_list": [ 01:21:14.185 { 01:21:14.185 "name": "BaseBdev1", 01:21:14.185 "uuid": "02fa2931-6079-4cc0-869c-45d55e0e60cf", 01:21:14.185 "is_configured": true, 01:21:14.185 "data_offset": 2048, 01:21:14.185 "data_size": 63488 01:21:14.185 }, 01:21:14.185 { 01:21:14.185 "name": "BaseBdev2", 01:21:14.185 "uuid": "8b2b8aa7-5dfb-4026-91d3-60e4fd975bb0", 01:21:14.185 "is_configured": true, 01:21:14.185 "data_offset": 2048, 01:21:14.185 "data_size": 63488 01:21:14.185 } 01:21:14.185 ] 01:21:14.185 } 01:21:14.185 } 01:21:14.185 }' 01:21:14.185 05:16:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 01:21:14.186 05:16:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 01:21:14.186 BaseBdev2' 01:21:14.186 05:16:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:21:14.186 05:16:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 01:21:14.186 05:16:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:21:14.186 05:16:05 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:21:14.186 05:16:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 01:21:14.186 05:16:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:14.186 05:16:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:21:14.186 05:16:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:14.186 05:16:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:21:14.186 05:16:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:21:14.186 05:16:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:21:14.186 05:16:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 01:21:14.186 05:16:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:21:14.186 05:16:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:14.186 05:16:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:21:14.444 05:16:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:14.444 05:16:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:21:14.444 05:16:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:21:14.444 05:16:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 01:21:14.444 05:16:05 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 01:21:14.444 05:16:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:21:14.444 [2024-12-09 05:16:05.851770] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 01:21:14.444 [2024-12-09 05:16:05.851811] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 01:21:14.444 [2024-12-09 05:16:05.851874] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 01:21:14.444 05:16:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:14.444 05:16:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 01:21:14.444 05:16:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 01:21:14.444 05:16:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 01:21:14.444 05:16:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 01:21:14.444 05:16:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 01:21:14.444 05:16:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 01:21:14.444 05:16:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:21:14.444 05:16:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 01:21:14.444 05:16:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 01:21:14.444 05:16:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:21:14.444 05:16:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 01:21:14.444 05:16:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 01:21:14.444 05:16:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:21:14.444 05:16:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:21:14.444 05:16:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 01:21:14.444 05:16:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:21:14.444 05:16:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:21:14.444 05:16:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:14.444 05:16:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:21:14.444 05:16:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:14.444 05:16:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:21:14.444 "name": "Existed_Raid", 01:21:14.444 "uuid": "f93fe74c-9aea-4fda-8f82-9d6fad30ce6b", 01:21:14.444 "strip_size_kb": 64, 01:21:14.444 "state": "offline", 01:21:14.444 "raid_level": "raid0", 01:21:14.444 "superblock": true, 01:21:14.444 "num_base_bdevs": 2, 01:21:14.444 "num_base_bdevs_discovered": 1, 01:21:14.444 "num_base_bdevs_operational": 1, 01:21:14.444 "base_bdevs_list": [ 01:21:14.444 { 01:21:14.444 "name": null, 01:21:14.444 "uuid": "00000000-0000-0000-0000-000000000000", 01:21:14.444 "is_configured": false, 01:21:14.444 "data_offset": 0, 01:21:14.444 "data_size": 63488 01:21:14.444 }, 01:21:14.444 { 01:21:14.444 "name": "BaseBdev2", 01:21:14.444 "uuid": "8b2b8aa7-5dfb-4026-91d3-60e4fd975bb0", 01:21:14.444 "is_configured": true, 01:21:14.444 "data_offset": 2048, 01:21:14.444 "data_size": 63488 01:21:14.444 } 01:21:14.444 ] 01:21:14.444 }' 01:21:14.444 05:16:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 01:21:14.444 05:16:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:21:15.010 05:16:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 01:21:15.010 05:16:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 01:21:15.011 05:16:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 01:21:15.011 05:16:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:15.011 05:16:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:21:15.011 05:16:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 01:21:15.011 05:16:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:15.011 05:16:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 01:21:15.011 05:16:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 01:21:15.011 05:16:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 01:21:15.011 05:16:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:15.011 05:16:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:21:15.011 [2024-12-09 05:16:06.528200] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 01:21:15.011 [2024-12-09 05:16:06.528344] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 01:21:15.270 05:16:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:15.270 05:16:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 01:21:15.270 05:16:06 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 01:21:15.270 05:16:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 01:21:15.270 05:16:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 01:21:15.271 05:16:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:15.271 05:16:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:21:15.271 05:16:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:15.271 05:16:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 01:21:15.271 05:16:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 01:21:15.271 05:16:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 01:21:15.271 05:16:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 60738 01:21:15.271 05:16:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 60738 ']' 01:21:15.271 05:16:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 60738 01:21:15.271 05:16:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 01:21:15.271 05:16:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:21:15.271 05:16:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60738 01:21:15.271 killing process with pid 60738 01:21:15.271 05:16:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:21:15.271 05:16:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:21:15.271 05:16:06 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60738' 01:21:15.271 05:16:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 60738 01:21:15.271 05:16:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 60738 01:21:15.271 [2024-12-09 05:16:06.731374] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 01:21:15.271 [2024-12-09 05:16:06.750531] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 01:21:16.659 05:16:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 01:21:16.659 ************************************ 01:21:16.659 END TEST raid_state_function_test_sb 01:21:16.659 ************************************ 01:21:16.659 01:21:16.659 real 0m5.908s 01:21:16.659 user 0m8.813s 01:21:16.659 sys 0m0.841s 01:21:16.659 05:16:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 01:21:16.659 05:16:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:21:16.659 05:16:08 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 2 01:21:16.659 05:16:08 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 01:21:16.659 05:16:08 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 01:21:16.659 05:16:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 01:21:16.659 ************************************ 01:21:16.659 START TEST raid_superblock_test 01:21:16.659 ************************************ 01:21:16.659 05:16:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 2 01:21:16.659 05:16:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 01:21:16.659 05:16:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 01:21:16.659 05:16:08 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 01:21:16.659 05:16:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 01:21:16.659 05:16:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 01:21:16.659 05:16:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 01:21:16.659 05:16:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 01:21:16.659 05:16:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 01:21:16.659 05:16:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 01:21:16.659 05:16:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 01:21:16.659 05:16:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 01:21:16.659 05:16:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 01:21:16.659 05:16:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 01:21:16.659 05:16:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 01:21:16.659 05:16:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 01:21:16.659 05:16:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 01:21:16.659 05:16:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=61001 01:21:16.659 05:16:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 01:21:16.659 05:16:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 61001 01:21:16.659 05:16:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 61001 ']' 01:21:16.659 05:16:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 
-- # local rpc_addr=/var/tmp/spdk.sock 01:21:16.659 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:21:16.659 05:16:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 01:21:16.659 05:16:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:21:16.659 05:16:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 01:21:16.659 05:16:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:21:16.659 [2024-12-09 05:16:08.189761] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:21:16.659 [2024-12-09 05:16:08.190295] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61001 ] 01:21:16.919 [2024-12-09 05:16:08.374352] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:21:16.919 [2024-12-09 05:16:08.491488] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:21:17.177 [2024-12-09 05:16:08.701538] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 01:21:17.177 [2024-12-09 05:16:08.701589] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 01:21:17.744 05:16:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:21:17.744 05:16:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 01:21:17.744 05:16:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 01:21:17.744 05:16:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 01:21:17.744 05:16:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- 
# local bdev_malloc=malloc1 01:21:17.744 05:16:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 01:21:17.744 05:16:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 01:21:17.744 05:16:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 01:21:17.744 05:16:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 01:21:17.744 05:16:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 01:21:17.744 05:16:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 01:21:17.744 05:16:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:17.744 05:16:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:21:17.744 malloc1 01:21:17.744 05:16:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:17.744 05:16:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 01:21:17.744 05:16:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:17.744 05:16:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:21:17.744 [2024-12-09 05:16:09.151905] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 01:21:17.744 [2024-12-09 05:16:09.152255] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:21:17.744 [2024-12-09 05:16:09.152328] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 01:21:17.744 [2024-12-09 05:16:09.152648] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:21:17.744 [2024-12-09 05:16:09.155263] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:21:17.744 [2024-12-09 05:16:09.155453] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 01:21:17.744 pt1 01:21:17.744 05:16:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:17.744 05:16:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 01:21:17.744 05:16:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 01:21:17.744 05:16:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 01:21:17.744 05:16:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 01:21:17.744 05:16:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 01:21:17.744 05:16:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 01:21:17.744 05:16:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 01:21:17.744 05:16:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 01:21:17.744 05:16:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 01:21:17.744 05:16:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:17.744 05:16:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:21:17.744 malloc2 01:21:17.744 05:16:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:17.744 05:16:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 01:21:17.744 05:16:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:17.744 05:16:09 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 01:21:17.744 [2024-12-09 05:16:09.203950] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 01:21:17.744 [2024-12-09 05:16:09.204017] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:21:17.744 [2024-12-09 05:16:09.204052] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 01:21:17.744 [2024-12-09 05:16:09.204066] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:21:17.744 [2024-12-09 05:16:09.206667] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:21:17.744 [2024-12-09 05:16:09.206706] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 01:21:17.744 pt2 01:21:17.744 05:16:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:17.744 05:16:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 01:21:17.744 05:16:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 01:21:17.744 05:16:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 01:21:17.744 05:16:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:17.744 05:16:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:21:17.744 [2024-12-09 05:16:09.216015] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 01:21:17.744 [2024-12-09 05:16:09.218287] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 01:21:17.744 [2024-12-09 05:16:09.218628] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 01:21:17.744 [2024-12-09 05:16:09.218651] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 01:21:17.744 [2024-12-09 05:16:09.218925] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 01:21:17.744 [2024-12-09 05:16:09.219113] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 01:21:17.744 [2024-12-09 05:16:09.219131] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 01:21:17.744 [2024-12-09 05:16:09.219289] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:21:17.744 05:16:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:17.744 05:16:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 01:21:17.744 05:16:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:21:17.744 05:16:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:21:17.744 05:16:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 01:21:17.744 05:16:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:21:17.744 05:16:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 01:21:17.744 05:16:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:21:17.744 05:16:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:21:17.744 05:16:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:21:17.744 05:16:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:21:17.744 05:16:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:21:17.744 05:16:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:21:17.744 05:16:09 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 01:21:17.744 05:16:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:21:17.744 05:16:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:17.744 05:16:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:21:17.744 "name": "raid_bdev1", 01:21:17.744 "uuid": "b2bc4472-228f-410a-931f-5562e86f0310", 01:21:17.744 "strip_size_kb": 64, 01:21:17.744 "state": "online", 01:21:17.744 "raid_level": "raid0", 01:21:17.744 "superblock": true, 01:21:17.744 "num_base_bdevs": 2, 01:21:17.744 "num_base_bdevs_discovered": 2, 01:21:17.744 "num_base_bdevs_operational": 2, 01:21:17.744 "base_bdevs_list": [ 01:21:17.744 { 01:21:17.744 "name": "pt1", 01:21:17.744 "uuid": "00000000-0000-0000-0000-000000000001", 01:21:17.744 "is_configured": true, 01:21:17.744 "data_offset": 2048, 01:21:17.744 "data_size": 63488 01:21:17.744 }, 01:21:17.744 { 01:21:17.744 "name": "pt2", 01:21:17.744 "uuid": "00000000-0000-0000-0000-000000000002", 01:21:17.744 "is_configured": true, 01:21:17.744 "data_offset": 2048, 01:21:17.744 "data_size": 63488 01:21:17.744 } 01:21:17.744 ] 01:21:17.744 }' 01:21:17.744 05:16:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:21:17.744 05:16:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:21:18.311 05:16:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 01:21:18.311 05:16:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 01:21:18.311 05:16:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 01:21:18.311 05:16:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 01:21:18.311 05:16:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 01:21:18.311 05:16:09 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 01:21:18.311 05:16:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 01:21:18.311 05:16:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 01:21:18.311 05:16:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:18.311 05:16:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:21:18.311 [2024-12-09 05:16:09.764374] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 01:21:18.311 05:16:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:18.311 05:16:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 01:21:18.311 "name": "raid_bdev1", 01:21:18.311 "aliases": [ 01:21:18.311 "b2bc4472-228f-410a-931f-5562e86f0310" 01:21:18.311 ], 01:21:18.311 "product_name": "Raid Volume", 01:21:18.311 "block_size": 512, 01:21:18.311 "num_blocks": 126976, 01:21:18.311 "uuid": "b2bc4472-228f-410a-931f-5562e86f0310", 01:21:18.311 "assigned_rate_limits": { 01:21:18.311 "rw_ios_per_sec": 0, 01:21:18.311 "rw_mbytes_per_sec": 0, 01:21:18.311 "r_mbytes_per_sec": 0, 01:21:18.311 "w_mbytes_per_sec": 0 01:21:18.311 }, 01:21:18.311 "claimed": false, 01:21:18.311 "zoned": false, 01:21:18.311 "supported_io_types": { 01:21:18.311 "read": true, 01:21:18.311 "write": true, 01:21:18.311 "unmap": true, 01:21:18.311 "flush": true, 01:21:18.311 "reset": true, 01:21:18.311 "nvme_admin": false, 01:21:18.311 "nvme_io": false, 01:21:18.311 "nvme_io_md": false, 01:21:18.311 "write_zeroes": true, 01:21:18.311 "zcopy": false, 01:21:18.311 "get_zone_info": false, 01:21:18.311 "zone_management": false, 01:21:18.311 "zone_append": false, 01:21:18.311 "compare": false, 01:21:18.311 "compare_and_write": false, 01:21:18.311 "abort": false, 01:21:18.311 "seek_hole": false, 01:21:18.311 
"seek_data": false, 01:21:18.311 "copy": false, 01:21:18.311 "nvme_iov_md": false 01:21:18.311 }, 01:21:18.311 "memory_domains": [ 01:21:18.311 { 01:21:18.311 "dma_device_id": "system", 01:21:18.311 "dma_device_type": 1 01:21:18.311 }, 01:21:18.311 { 01:21:18.311 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:21:18.311 "dma_device_type": 2 01:21:18.311 }, 01:21:18.311 { 01:21:18.311 "dma_device_id": "system", 01:21:18.311 "dma_device_type": 1 01:21:18.311 }, 01:21:18.311 { 01:21:18.311 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:21:18.311 "dma_device_type": 2 01:21:18.311 } 01:21:18.311 ], 01:21:18.311 "driver_specific": { 01:21:18.311 "raid": { 01:21:18.311 "uuid": "b2bc4472-228f-410a-931f-5562e86f0310", 01:21:18.311 "strip_size_kb": 64, 01:21:18.311 "state": "online", 01:21:18.311 "raid_level": "raid0", 01:21:18.311 "superblock": true, 01:21:18.311 "num_base_bdevs": 2, 01:21:18.311 "num_base_bdevs_discovered": 2, 01:21:18.311 "num_base_bdevs_operational": 2, 01:21:18.311 "base_bdevs_list": [ 01:21:18.311 { 01:21:18.311 "name": "pt1", 01:21:18.311 "uuid": "00000000-0000-0000-0000-000000000001", 01:21:18.311 "is_configured": true, 01:21:18.311 "data_offset": 2048, 01:21:18.311 "data_size": 63488 01:21:18.311 }, 01:21:18.311 { 01:21:18.311 "name": "pt2", 01:21:18.311 "uuid": "00000000-0000-0000-0000-000000000002", 01:21:18.311 "is_configured": true, 01:21:18.311 "data_offset": 2048, 01:21:18.311 "data_size": 63488 01:21:18.311 } 01:21:18.311 ] 01:21:18.311 } 01:21:18.311 } 01:21:18.311 }' 01:21:18.311 05:16:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 01:21:18.311 05:16:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 01:21:18.311 pt2' 01:21:18.311 05:16:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:21:18.311 05:16:09 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 01:21:18.311 05:16:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:21:18.311 05:16:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 01:21:18.311 05:16:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:21:18.311 05:16:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:18.311 05:16:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:21:18.569 05:16:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:18.569 05:16:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:21:18.569 05:16:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:21:18.569 05:16:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:21:18.569 05:16:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 01:21:18.569 05:16:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:21:18.569 05:16:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:18.569 05:16:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:21:18.569 05:16:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:18.569 05:16:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:21:18.569 05:16:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:21:18.569 05:16:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 01:21:18.569 05:16:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 01:21:18.569 05:16:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:18.569 05:16:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:21:18.569 [2024-12-09 05:16:10.036503] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 01:21:18.569 05:16:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:18.569 05:16:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=b2bc4472-228f-410a-931f-5562e86f0310 01:21:18.569 05:16:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z b2bc4472-228f-410a-931f-5562e86f0310 ']' 01:21:18.569 05:16:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 01:21:18.569 05:16:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:18.569 05:16:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:21:18.569 [2024-12-09 05:16:10.084154] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 01:21:18.569 [2024-12-09 05:16:10.084180] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 01:21:18.570 [2024-12-09 05:16:10.084265] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 01:21:18.570 [2024-12-09 05:16:10.084325] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 01:21:18.570 [2024-12-09 05:16:10.084344] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 01:21:18.570 05:16:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:18.570 05:16:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd 
bdev_raid_get_bdevs all 01:21:18.570 05:16:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:18.570 05:16:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:21:18.570 05:16:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 01:21:18.570 05:16:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:18.570 05:16:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 01:21:18.570 05:16:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 01:21:18.570 05:16:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 01:21:18.570 05:16:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 01:21:18.570 05:16:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:18.570 05:16:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:21:18.570 05:16:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:18.570 05:16:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 01:21:18.570 05:16:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 01:21:18.570 05:16:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:18.570 05:16:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:21:18.570 05:16:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:18.570 05:16:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 01:21:18.570 05:16:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 01:21:18.570 05:16:10 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 01:21:18.570 05:16:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:21:18.827 05:16:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:18.827 05:16:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 01:21:18.827 05:16:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 01:21:18.827 05:16:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 01:21:18.827 05:16:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 01:21:18.827 05:16:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 01:21:18.827 05:16:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:21:18.827 05:16:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 01:21:18.827 05:16:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:21:18.827 05:16:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 01:21:18.827 05:16:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:18.827 05:16:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:21:18.827 [2024-12-09 05:16:10.228233] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 01:21:18.827 [2024-12-09 05:16:10.230745] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 01:21:18.827 [2024-12-09 05:16:10.230827] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: 
*ERROR*: Superblock of a different raid bdev found on bdev malloc1 01:21:18.827 [2024-12-09 05:16:10.230894] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 01:21:18.827 [2024-12-09 05:16:10.230918] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 01:21:18.827 [2024-12-09 05:16:10.230935] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 01:21:18.827 request: 01:21:18.827 { 01:21:18.827 "name": "raid_bdev1", 01:21:18.827 "raid_level": "raid0", 01:21:18.827 "base_bdevs": [ 01:21:18.827 "malloc1", 01:21:18.827 "malloc2" 01:21:18.827 ], 01:21:18.827 "strip_size_kb": 64, 01:21:18.827 "superblock": false, 01:21:18.827 "method": "bdev_raid_create", 01:21:18.827 "req_id": 1 01:21:18.827 } 01:21:18.827 Got JSON-RPC error response 01:21:18.827 response: 01:21:18.827 { 01:21:18.827 "code": -17, 01:21:18.827 "message": "Failed to create RAID bdev raid_bdev1: File exists" 01:21:18.827 } 01:21:18.827 05:16:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 01:21:18.827 05:16:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 01:21:18.827 05:16:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:21:18.827 05:16:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:21:18.827 05:16:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:21:18.827 05:16:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 01:21:18.827 05:16:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 01:21:18.827 05:16:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:18.827 05:16:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:21:18.827 
05:16:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:18.827 05:16:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 01:21:18.827 05:16:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 01:21:18.827 05:16:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 01:21:18.827 05:16:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:18.827 05:16:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:21:18.827 [2024-12-09 05:16:10.292221] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 01:21:18.827 [2024-12-09 05:16:10.292586] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:21:18.827 [2024-12-09 05:16:10.292747] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 01:21:18.827 [2024-12-09 05:16:10.292872] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:21:18.827 [2024-12-09 05:16:10.296228] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:21:18.827 [2024-12-09 05:16:10.296401] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 01:21:18.827 [2024-12-09 05:16:10.296647] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 01:21:18.827 [2024-12-09 05:16:10.296907] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 01:21:18.827 pt1 01:21:18.827 05:16:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:18.827 05:16:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 01:21:18.827 05:16:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 
01:21:18.827 05:16:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:21:18.827 05:16:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 01:21:18.827 05:16:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:21:18.827 05:16:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 01:21:18.827 05:16:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:21:18.827 05:16:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:21:18.827 05:16:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:21:18.827 05:16:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:21:18.827 05:16:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:21:18.827 05:16:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:21:18.827 05:16:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:18.827 05:16:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:21:18.827 05:16:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:18.827 05:16:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:21:18.827 "name": "raid_bdev1", 01:21:18.827 "uuid": "b2bc4472-228f-410a-931f-5562e86f0310", 01:21:18.827 "strip_size_kb": 64, 01:21:18.827 "state": "configuring", 01:21:18.827 "raid_level": "raid0", 01:21:18.827 "superblock": true, 01:21:18.827 "num_base_bdevs": 2, 01:21:18.827 "num_base_bdevs_discovered": 1, 01:21:18.827 "num_base_bdevs_operational": 2, 01:21:18.827 "base_bdevs_list": [ 01:21:18.827 { 01:21:18.827 "name": "pt1", 01:21:18.827 "uuid": 
"00000000-0000-0000-0000-000000000001", 01:21:18.827 "is_configured": true, 01:21:18.827 "data_offset": 2048, 01:21:18.827 "data_size": 63488 01:21:18.827 }, 01:21:18.827 { 01:21:18.827 "name": null, 01:21:18.827 "uuid": "00000000-0000-0000-0000-000000000002", 01:21:18.827 "is_configured": false, 01:21:18.827 "data_offset": 2048, 01:21:18.827 "data_size": 63488 01:21:18.827 } 01:21:18.827 ] 01:21:18.827 }' 01:21:18.827 05:16:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:21:18.827 05:16:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:21:19.393 05:16:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 01:21:19.393 05:16:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 01:21:19.393 05:16:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 01:21:19.393 05:16:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 01:21:19.393 05:16:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:19.393 05:16:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:21:19.393 [2024-12-09 05:16:10.841020] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 01:21:19.393 [2024-12-09 05:16:10.841142] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:21:19.393 [2024-12-09 05:16:10.841175] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 01:21:19.393 [2024-12-09 05:16:10.841194] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:21:19.393 [2024-12-09 05:16:10.841900] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:21:19.393 [2024-12-09 05:16:10.841938] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt2 01:21:19.393 [2024-12-09 05:16:10.842100] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 01:21:19.393 [2024-12-09 05:16:10.842141] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 01:21:19.393 [2024-12-09 05:16:10.842290] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 01:21:19.393 [2024-12-09 05:16:10.842319] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 01:21:19.393 [2024-12-09 05:16:10.842663] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 01:21:19.393 [2024-12-09 05:16:10.842901] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 01:21:19.393 [2024-12-09 05:16:10.842938] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 01:21:19.393 [2024-12-09 05:16:10.843153] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:21:19.393 pt2 01:21:19.393 05:16:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:19.393 05:16:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 01:21:19.393 05:16:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 01:21:19.393 05:16:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 01:21:19.393 05:16:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:21:19.393 05:16:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:21:19.393 05:16:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 01:21:19.393 05:16:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:21:19.393 05:16:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- 
# local num_base_bdevs_operational=2 01:21:19.393 05:16:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:21:19.393 05:16:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:21:19.393 05:16:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:21:19.393 05:16:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:21:19.393 05:16:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:21:19.393 05:16:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:19.393 05:16:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:21:19.393 05:16:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:21:19.393 05:16:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:19.393 05:16:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:21:19.393 "name": "raid_bdev1", 01:21:19.393 "uuid": "b2bc4472-228f-410a-931f-5562e86f0310", 01:21:19.393 "strip_size_kb": 64, 01:21:19.393 "state": "online", 01:21:19.393 "raid_level": "raid0", 01:21:19.393 "superblock": true, 01:21:19.393 "num_base_bdevs": 2, 01:21:19.393 "num_base_bdevs_discovered": 2, 01:21:19.393 "num_base_bdevs_operational": 2, 01:21:19.393 "base_bdevs_list": [ 01:21:19.393 { 01:21:19.393 "name": "pt1", 01:21:19.393 "uuid": "00000000-0000-0000-0000-000000000001", 01:21:19.393 "is_configured": true, 01:21:19.393 "data_offset": 2048, 01:21:19.393 "data_size": 63488 01:21:19.393 }, 01:21:19.393 { 01:21:19.393 "name": "pt2", 01:21:19.393 "uuid": "00000000-0000-0000-0000-000000000002", 01:21:19.393 "is_configured": true, 01:21:19.393 "data_offset": 2048, 01:21:19.393 "data_size": 63488 01:21:19.393 } 01:21:19.393 ] 01:21:19.393 }' 01:21:19.393 05:16:10 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:21:19.393 05:16:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:21:19.960 05:16:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 01:21:19.960 05:16:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 01:21:19.960 05:16:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 01:21:19.960 05:16:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 01:21:19.960 05:16:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 01:21:19.960 05:16:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 01:21:19.960 05:16:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 01:21:19.960 05:16:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 01:21:19.960 05:16:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:19.960 05:16:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:21:19.960 [2024-12-09 05:16:11.377556] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 01:21:19.960 05:16:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:19.960 05:16:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 01:21:19.960 "name": "raid_bdev1", 01:21:19.960 "aliases": [ 01:21:19.960 "b2bc4472-228f-410a-931f-5562e86f0310" 01:21:19.960 ], 01:21:19.960 "product_name": "Raid Volume", 01:21:19.960 "block_size": 512, 01:21:19.960 "num_blocks": 126976, 01:21:19.960 "uuid": "b2bc4472-228f-410a-931f-5562e86f0310", 01:21:19.960 "assigned_rate_limits": { 01:21:19.960 "rw_ios_per_sec": 0, 01:21:19.960 "rw_mbytes_per_sec": 0, 01:21:19.960 
"r_mbytes_per_sec": 0, 01:21:19.960 "w_mbytes_per_sec": 0 01:21:19.960 }, 01:21:19.960 "claimed": false, 01:21:19.960 "zoned": false, 01:21:19.960 "supported_io_types": { 01:21:19.960 "read": true, 01:21:19.960 "write": true, 01:21:19.960 "unmap": true, 01:21:19.960 "flush": true, 01:21:19.960 "reset": true, 01:21:19.961 "nvme_admin": false, 01:21:19.961 "nvme_io": false, 01:21:19.961 "nvme_io_md": false, 01:21:19.961 "write_zeroes": true, 01:21:19.961 "zcopy": false, 01:21:19.961 "get_zone_info": false, 01:21:19.961 "zone_management": false, 01:21:19.961 "zone_append": false, 01:21:19.961 "compare": false, 01:21:19.961 "compare_and_write": false, 01:21:19.961 "abort": false, 01:21:19.961 "seek_hole": false, 01:21:19.961 "seek_data": false, 01:21:19.961 "copy": false, 01:21:19.961 "nvme_iov_md": false 01:21:19.961 }, 01:21:19.961 "memory_domains": [ 01:21:19.961 { 01:21:19.961 "dma_device_id": "system", 01:21:19.961 "dma_device_type": 1 01:21:19.961 }, 01:21:19.961 { 01:21:19.961 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:21:19.961 "dma_device_type": 2 01:21:19.961 }, 01:21:19.961 { 01:21:19.961 "dma_device_id": "system", 01:21:19.961 "dma_device_type": 1 01:21:19.961 }, 01:21:19.961 { 01:21:19.961 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:21:19.961 "dma_device_type": 2 01:21:19.961 } 01:21:19.961 ], 01:21:19.961 "driver_specific": { 01:21:19.961 "raid": { 01:21:19.961 "uuid": "b2bc4472-228f-410a-931f-5562e86f0310", 01:21:19.961 "strip_size_kb": 64, 01:21:19.961 "state": "online", 01:21:19.961 "raid_level": "raid0", 01:21:19.961 "superblock": true, 01:21:19.961 "num_base_bdevs": 2, 01:21:19.961 "num_base_bdevs_discovered": 2, 01:21:19.961 "num_base_bdevs_operational": 2, 01:21:19.961 "base_bdevs_list": [ 01:21:19.961 { 01:21:19.961 "name": "pt1", 01:21:19.961 "uuid": "00000000-0000-0000-0000-000000000001", 01:21:19.961 "is_configured": true, 01:21:19.961 "data_offset": 2048, 01:21:19.961 "data_size": 63488 01:21:19.961 }, 01:21:19.961 { 01:21:19.961 "name": 
"pt2", 01:21:19.961 "uuid": "00000000-0000-0000-0000-000000000002", 01:21:19.961 "is_configured": true, 01:21:19.961 "data_offset": 2048, 01:21:19.961 "data_size": 63488 01:21:19.961 } 01:21:19.961 ] 01:21:19.961 } 01:21:19.961 } 01:21:19.961 }' 01:21:19.961 05:16:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 01:21:19.961 05:16:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 01:21:19.961 pt2' 01:21:19.961 05:16:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:21:19.961 05:16:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 01:21:19.961 05:16:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:21:19.961 05:16:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:21:19.961 05:16:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 01:21:19.961 05:16:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:19.961 05:16:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:21:19.961 05:16:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:20.219 05:16:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:21:20.219 05:16:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:21:20.219 05:16:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:21:20.219 05:16:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 01:21:20.219 05:16:11 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 01:21:20.219 05:16:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:21:20.219 05:16:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:21:20.219 05:16:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:20.219 05:16:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:21:20.219 05:16:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:21:20.219 05:16:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 01:21:20.219 05:16:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 01:21:20.219 05:16:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:20.219 05:16:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:21:20.219 [2024-12-09 05:16:11.641531] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 01:21:20.219 05:16:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:20.219 05:16:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' b2bc4472-228f-410a-931f-5562e86f0310 '!=' b2bc4472-228f-410a-931f-5562e86f0310 ']' 01:21:20.219 05:16:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 01:21:20.219 05:16:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 01:21:20.219 05:16:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 01:21:20.219 05:16:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 61001 01:21:20.219 05:16:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 61001 ']' 01:21:20.219 05:16:11 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@958 -- # kill -0 61001 01:21:20.219 05:16:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 01:21:20.219 05:16:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:21:20.219 05:16:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61001 01:21:20.219 05:16:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:21:20.219 05:16:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:21:20.219 05:16:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61001' 01:21:20.219 killing process with pid 61001 01:21:20.219 05:16:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 61001 01:21:20.219 [2024-12-09 05:16:11.722650] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 01:21:20.219 05:16:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 61001 01:21:20.219 [2024-12-09 05:16:11.722754] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 01:21:20.219 [2024-12-09 05:16:11.722826] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 01:21:20.219 [2024-12-09 05:16:11.722846] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 01:21:20.478 [2024-12-09 05:16:11.936426] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 01:21:21.854 05:16:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 01:21:21.854 01:21:21.854 real 0m4.965s 01:21:21.854 user 0m7.208s 01:21:21.854 sys 0m0.797s 01:21:21.854 ************************************ 01:21:21.854 END TEST raid_superblock_test 01:21:21.854 ************************************ 01:21:21.854 05:16:13 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@1130 -- # xtrace_disable 01:21:21.854 05:16:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:21:21.854 05:16:13 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 01:21:21.854 05:16:13 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 01:21:21.855 05:16:13 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 01:21:21.855 05:16:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 01:21:21.855 ************************************ 01:21:21.855 START TEST raid_read_error_test 01:21:21.855 ************************************ 01:21:21.855 05:16:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 read 01:21:21.855 05:16:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 01:21:21.855 05:16:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 01:21:21.855 05:16:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 01:21:21.855 05:16:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 01:21:21.855 05:16:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 01:21:21.855 05:16:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 01:21:21.855 05:16:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 01:21:21.855 05:16:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 01:21:21.855 05:16:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 01:21:21.855 05:16:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 01:21:21.855 05:16:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 01:21:21.855 05:16:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2') 01:21:21.855 05:16:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 01:21:21.855 05:16:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 01:21:21.855 05:16:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 01:21:21.855 05:16:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 01:21:21.855 05:16:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 01:21:21.855 05:16:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 01:21:21.855 05:16:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 01:21:21.855 05:16:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 01:21:21.855 05:16:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 01:21:21.855 05:16:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 01:21:21.855 05:16:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.Dpr97KI61S 01:21:21.855 05:16:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61218 01:21:21.855 05:16:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61218 01:21:21.855 05:16:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 61218 ']' 01:21:21.855 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
01:21:21.855 05:16:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:21:21.855 05:16:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 01:21:21.855 05:16:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 01:21:21.855 05:16:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:21:21.855 05:16:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 01:21:21.855 05:16:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 01:21:21.855 [2024-12-09 05:16:13.195564] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:21:21.855 [2024-12-09 05:16:13.195718] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61218 ] 01:21:21.855 [2024-12-09 05:16:13.372125] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:21:22.114 [2024-12-09 05:16:13.540501] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:21:22.389 [2024-12-09 05:16:13.749970] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 01:21:22.390 [2024-12-09 05:16:13.750025] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 01:21:22.648 05:16:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:21:22.648 05:16:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 01:21:22.648 05:16:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in 
"${base_bdevs[@]}" 01:21:22.648 05:16:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 01:21:22.648 05:16:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:22.648 05:16:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 01:21:22.907 BaseBdev1_malloc 01:21:22.907 05:16:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:22.907 05:16:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 01:21:22.907 05:16:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:22.907 05:16:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 01:21:22.907 true 01:21:22.907 05:16:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:22.907 05:16:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 01:21:22.907 05:16:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:22.907 05:16:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 01:21:22.907 [2024-12-09 05:16:14.302594] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 01:21:22.908 [2024-12-09 05:16:14.302677] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:21:22.908 [2024-12-09 05:16:14.302705] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 01:21:22.908 [2024-12-09 05:16:14.302722] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:21:22.908 [2024-12-09 05:16:14.305537] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:21:22.908 [2024-12-09 05:16:14.305603] vbdev_passthru.c: 711:vbdev_passthru_register: 
*NOTICE*: created pt_bdev for: BaseBdev1 01:21:22.908 BaseBdev1 01:21:22.908 05:16:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:22.908 05:16:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 01:21:22.908 05:16:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 01:21:22.908 05:16:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:22.908 05:16:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 01:21:22.908 BaseBdev2_malloc 01:21:22.908 05:16:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:22.908 05:16:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 01:21:22.908 05:16:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:22.908 05:16:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 01:21:22.908 true 01:21:22.908 05:16:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:22.908 05:16:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 01:21:22.908 05:16:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:22.908 05:16:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 01:21:22.908 [2024-12-09 05:16:14.364137] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 01:21:22.908 [2024-12-09 05:16:14.364212] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:21:22.908 [2024-12-09 05:16:14.364237] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 01:21:22.908 [2024-12-09 05:16:14.364253] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:21:22.908 [2024-12-09 05:16:14.367054] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:21:22.908 [2024-12-09 05:16:14.367113] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 01:21:22.908 BaseBdev2 01:21:22.908 05:16:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:22.908 05:16:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 01:21:22.908 05:16:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:22.908 05:16:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 01:21:22.908 [2024-12-09 05:16:14.372191] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 01:21:22.908 [2024-12-09 05:16:14.374599] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 01:21:22.908 [2024-12-09 05:16:14.374878] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 01:21:22.908 [2024-12-09 05:16:14.374903] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 01:21:22.908 [2024-12-09 05:16:14.375171] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 01:21:22.908 [2024-12-09 05:16:14.375395] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 01:21:22.908 [2024-12-09 05:16:14.375416] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 01:21:22.908 [2024-12-09 05:16:14.375592] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:21:22.908 05:16:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:22.908 05:16:14 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 01:21:22.908 05:16:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:21:22.908 05:16:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:21:22.908 05:16:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 01:21:22.908 05:16:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:21:22.908 05:16:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 01:21:22.908 05:16:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:21:22.908 05:16:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:21:22.908 05:16:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:21:22.908 05:16:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:21:22.908 05:16:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:21:22.908 05:16:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:21:22.908 05:16:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:22.908 05:16:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 01:21:22.908 05:16:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:22.908 05:16:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:21:22.908 "name": "raid_bdev1", 01:21:22.908 "uuid": "51bd86b9-a9f5-4556-a4ab-9ca12d146d82", 01:21:22.908 "strip_size_kb": 64, 01:21:22.908 "state": "online", 01:21:22.908 "raid_level": "raid0", 01:21:22.908 "superblock": true, 01:21:22.908 "num_base_bdevs": 2, 01:21:22.908 
"num_base_bdevs_discovered": 2, 01:21:22.908 "num_base_bdevs_operational": 2, 01:21:22.908 "base_bdevs_list": [ 01:21:22.908 { 01:21:22.908 "name": "BaseBdev1", 01:21:22.908 "uuid": "4b49ac14-63b5-5045-8036-0eb32b6fdeab", 01:21:22.908 "is_configured": true, 01:21:22.908 "data_offset": 2048, 01:21:22.908 "data_size": 63488 01:21:22.908 }, 01:21:22.908 { 01:21:22.908 "name": "BaseBdev2", 01:21:22.908 "uuid": "583a78e1-abdf-5737-8a4d-095dab06728f", 01:21:22.908 "is_configured": true, 01:21:22.908 "data_offset": 2048, 01:21:22.908 "data_size": 63488 01:21:22.908 } 01:21:22.908 ] 01:21:22.908 }' 01:21:22.908 05:16:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:21:22.908 05:16:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 01:21:23.474 05:16:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 01:21:23.474 05:16:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 01:21:23.474 [2024-12-09 05:16:15.023100] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 01:21:24.409 05:16:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 01:21:24.409 05:16:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:24.409 05:16:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 01:21:24.410 05:16:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:24.410 05:16:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 01:21:24.410 05:16:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 01:21:24.410 05:16:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 01:21:24.410 05:16:15 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 01:21:24.410 05:16:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:21:24.410 05:16:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:21:24.410 05:16:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 01:21:24.410 05:16:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:21:24.410 05:16:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 01:21:24.410 05:16:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:21:24.410 05:16:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:21:24.410 05:16:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:21:24.410 05:16:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:21:24.410 05:16:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:21:24.410 05:16:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:24.410 05:16:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 01:21:24.410 05:16:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:21:24.410 05:16:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:24.410 05:16:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:21:24.410 "name": "raid_bdev1", 01:21:24.410 "uuid": "51bd86b9-a9f5-4556-a4ab-9ca12d146d82", 01:21:24.410 "strip_size_kb": 64, 01:21:24.410 "state": "online", 01:21:24.410 "raid_level": "raid0", 01:21:24.410 "superblock": true, 01:21:24.410 "num_base_bdevs": 2, 
01:21:24.410 "num_base_bdevs_discovered": 2, 01:21:24.410 "num_base_bdevs_operational": 2, 01:21:24.410 "base_bdevs_list": [ 01:21:24.410 { 01:21:24.410 "name": "BaseBdev1", 01:21:24.410 "uuid": "4b49ac14-63b5-5045-8036-0eb32b6fdeab", 01:21:24.410 "is_configured": true, 01:21:24.410 "data_offset": 2048, 01:21:24.410 "data_size": 63488 01:21:24.410 }, 01:21:24.410 { 01:21:24.410 "name": "BaseBdev2", 01:21:24.410 "uuid": "583a78e1-abdf-5737-8a4d-095dab06728f", 01:21:24.410 "is_configured": true, 01:21:24.410 "data_offset": 2048, 01:21:24.410 "data_size": 63488 01:21:24.410 } 01:21:24.410 ] 01:21:24.410 }' 01:21:24.410 05:16:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:21:24.410 05:16:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 01:21:24.977 05:16:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 01:21:24.977 05:16:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:24.977 05:16:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 01:21:24.977 [2024-12-09 05:16:16.462344] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 01:21:24.977 [2024-12-09 05:16:16.462427] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 01:21:24.977 [2024-12-09 05:16:16.466431] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 01:21:24.977 [2024-12-09 05:16:16.466676] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:21:24.977 [2024-12-09 05:16:16.466866] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 01:21:24.977 [2024-12-09 05:16:16.467127] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 01:21:24.977 { 01:21:24.977 "results": [ 01:21:24.977 { 01:21:24.977 "job": 
"raid_bdev1", 01:21:24.977 "core_mask": "0x1", 01:21:24.977 "workload": "randrw", 01:21:24.977 "percentage": 50, 01:21:24.977 "status": "finished", 01:21:24.977 "queue_depth": 1, 01:21:24.977 "io_size": 131072, 01:21:24.977 "runtime": 1.436935, 01:21:24.977 "iops": 10969.180930243887, 01:21:24.977 "mibps": 1371.1476162804859, 01:21:24.977 "io_failed": 1, 01:21:24.977 "io_timeout": 0, 01:21:24.977 "avg_latency_us": 127.91245598149867, 01:21:24.977 "min_latency_us": 35.14181818181818, 01:21:24.977 "max_latency_us": 1995.8690909090908 01:21:24.977 } 01:21:24.977 ], 01:21:24.977 "core_count": 1 01:21:24.977 } 01:21:24.977 05:16:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:24.977 05:16:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61218 01:21:24.977 05:16:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 61218 ']' 01:21:24.977 05:16:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 61218 01:21:24.977 05:16:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 01:21:24.977 05:16:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:21:24.977 05:16:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61218 01:21:24.977 killing process with pid 61218 01:21:24.977 05:16:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:21:24.977 05:16:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:21:24.977 05:16:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61218' 01:21:24.978 05:16:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 61218 01:21:24.978 [2024-12-09 05:16:16.513050] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 01:21:24.978 
05:16:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 61218 01:21:25.237 [2024-12-09 05:16:16.634465] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 01:21:26.615 05:16:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.Dpr97KI61S 01:21:26.615 05:16:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 01:21:26.615 05:16:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 01:21:26.615 05:16:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.70 01:21:26.615 05:16:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 01:21:26.615 05:16:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 01:21:26.615 05:16:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 01:21:26.615 ************************************ 01:21:26.615 END TEST raid_read_error_test 01:21:26.615 ************************************ 01:21:26.615 05:16:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.70 != \0\.\0\0 ]] 01:21:26.615 01:21:26.615 real 0m4.742s 01:21:26.615 user 0m5.904s 01:21:26.615 sys 0m0.647s 01:21:26.615 05:16:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 01:21:26.615 05:16:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 01:21:26.615 05:16:17 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 01:21:26.615 05:16:17 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 01:21:26.615 05:16:17 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 01:21:26.615 05:16:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 01:21:26.615 ************************************ 01:21:26.615 START TEST raid_write_error_test 01:21:26.615 ************************************ 01:21:26.615 05:16:17 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 write 01:21:26.615 05:16:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 01:21:26.615 05:16:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 01:21:26.615 05:16:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 01:21:26.615 05:16:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 01:21:26.615 05:16:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 01:21:26.615 05:16:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 01:21:26.615 05:16:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 01:21:26.615 05:16:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 01:21:26.615 05:16:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 01:21:26.615 05:16:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 01:21:26.615 05:16:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 01:21:26.615 05:16:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 01:21:26.615 05:16:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 01:21:26.615 05:16:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 01:21:26.615 05:16:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 01:21:26.615 05:16:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 01:21:26.615 05:16:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 01:21:26.615 05:16:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 01:21:26.615 
05:16:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 01:21:26.615 05:16:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 01:21:26.615 05:16:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 01:21:26.615 05:16:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 01:21:26.615 05:16:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.OWsB8S0Njg 01:21:26.615 05:16:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61364 01:21:26.615 05:16:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61364 01:21:26.615 05:16:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 61364 ']' 01:21:26.615 05:16:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 01:21:26.615 05:16:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:21:26.615 05:16:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 01:21:26.615 05:16:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:21:26.615 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:21:26.615 05:16:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 01:21:26.615 05:16:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 01:21:26.615 [2024-12-09 05:16:18.016508] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
01:21:26.615 [2024-12-09 05:16:18.016942] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61364 ] 01:21:26.615 [2024-12-09 05:16:18.191316] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:21:26.875 [2024-12-09 05:16:18.343491] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:21:27.134 [2024-12-09 05:16:18.572728] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 01:21:27.134 [2024-12-09 05:16:18.572788] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 01:21:27.701 05:16:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:21:27.701 05:16:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 01:21:27.701 05:16:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 01:21:27.701 05:16:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 01:21:27.701 05:16:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:27.701 05:16:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 01:21:27.701 BaseBdev1_malloc 01:21:27.701 05:16:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:27.701 05:16:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 01:21:27.701 05:16:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:27.701 05:16:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 01:21:27.701 true 01:21:27.701 05:16:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 01:21:27.701 05:16:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 01:21:27.701 05:16:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:27.701 05:16:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 01:21:27.701 [2024-12-09 05:16:19.097929] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 01:21:27.701 [2024-12-09 05:16:19.098300] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:21:27.701 [2024-12-09 05:16:19.098354] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 01:21:27.701 [2024-12-09 05:16:19.098385] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:21:27.701 [2024-12-09 05:16:19.100979] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:21:27.701 [2024-12-09 05:16:19.101025] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 01:21:27.701 BaseBdev1 01:21:27.701 05:16:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:27.701 05:16:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 01:21:27.702 05:16:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 01:21:27.702 05:16:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:27.702 05:16:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 01:21:27.702 BaseBdev2_malloc 01:21:27.702 05:16:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:27.702 05:16:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 01:21:27.702 05:16:19 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:27.702 05:16:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 01:21:27.702 true 01:21:27.702 05:16:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:27.702 05:16:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 01:21:27.702 05:16:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:27.702 05:16:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 01:21:27.702 [2024-12-09 05:16:19.162001] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 01:21:27.702 [2024-12-09 05:16:19.162097] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:21:27.702 [2024-12-09 05:16:19.162123] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 01:21:27.702 [2024-12-09 05:16:19.162140] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:21:27.702 [2024-12-09 05:16:19.164807] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:21:27.702 [2024-12-09 05:16:19.164850] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 01:21:27.702 BaseBdev2 01:21:27.702 05:16:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:27.702 05:16:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 01:21:27.702 05:16:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:27.702 05:16:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 01:21:27.702 [2024-12-09 05:16:19.170104] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 01:21:27.702 [2024-12-09 05:16:19.172568] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 01:21:27.702 [2024-12-09 05:16:19.172803] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 01:21:27.702 [2024-12-09 05:16:19.172828] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 01:21:27.702 [2024-12-09 05:16:19.173100] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 01:21:27.702 [2024-12-09 05:16:19.173313] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 01:21:27.702 [2024-12-09 05:16:19.173334] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 01:21:27.702 [2024-12-09 05:16:19.173599] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:21:27.702 05:16:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:27.702 05:16:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 01:21:27.702 05:16:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:21:27.702 05:16:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:21:27.702 05:16:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 01:21:27.702 05:16:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:21:27.702 05:16:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 01:21:27.702 05:16:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:21:27.702 05:16:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:21:27.702 05:16:19 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:21:27.702 05:16:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:21:27.702 05:16:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:21:27.702 05:16:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:27.702 05:16:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:21:27.702 05:16:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 01:21:27.702 05:16:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:27.702 05:16:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:21:27.702 "name": "raid_bdev1", 01:21:27.702 "uuid": "8609a5b5-e8a9-4f4b-9791-1e5615c0f0a2", 01:21:27.702 "strip_size_kb": 64, 01:21:27.702 "state": "online", 01:21:27.702 "raid_level": "raid0", 01:21:27.702 "superblock": true, 01:21:27.702 "num_base_bdevs": 2, 01:21:27.702 "num_base_bdevs_discovered": 2, 01:21:27.702 "num_base_bdevs_operational": 2, 01:21:27.702 "base_bdevs_list": [ 01:21:27.702 { 01:21:27.702 "name": "BaseBdev1", 01:21:27.702 "uuid": "ecdae343-7722-5f38-a830-f1c84724ec4c", 01:21:27.702 "is_configured": true, 01:21:27.702 "data_offset": 2048, 01:21:27.702 "data_size": 63488 01:21:27.702 }, 01:21:27.702 { 01:21:27.702 "name": "BaseBdev2", 01:21:27.702 "uuid": "cc4d1a90-fb6c-5eb2-add5-b0b3ac4aa52f", 01:21:27.702 "is_configured": true, 01:21:27.702 "data_offset": 2048, 01:21:27.702 "data_size": 63488 01:21:27.702 } 01:21:27.702 ] 01:21:27.702 }' 01:21:27.702 05:16:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:21:27.702 05:16:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 01:21:28.270 05:16:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # 
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 01:21:28.270 05:16:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 01:21:28.270 [2024-12-09 05:16:19.835447] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 01:21:29.205 05:16:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 01:21:29.205 05:16:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:29.205 05:16:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 01:21:29.205 05:16:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:29.205 05:16:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 01:21:29.205 05:16:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 01:21:29.205 05:16:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 01:21:29.205 05:16:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 01:21:29.205 05:16:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:21:29.205 05:16:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:21:29.205 05:16:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 01:21:29.205 05:16:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:21:29.205 05:16:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 01:21:29.205 05:16:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:21:29.205 05:16:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
01:21:29.205 05:16:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:21:29.205 05:16:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:21:29.205 05:16:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:21:29.205 05:16:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:29.205 05:16:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:21:29.205 05:16:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 01:21:29.205 05:16:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:29.205 05:16:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:21:29.205 "name": "raid_bdev1", 01:21:29.205 "uuid": "8609a5b5-e8a9-4f4b-9791-1e5615c0f0a2", 01:21:29.205 "strip_size_kb": 64, 01:21:29.205 "state": "online", 01:21:29.205 "raid_level": "raid0", 01:21:29.205 "superblock": true, 01:21:29.205 "num_base_bdevs": 2, 01:21:29.205 "num_base_bdevs_discovered": 2, 01:21:29.205 "num_base_bdevs_operational": 2, 01:21:29.205 "base_bdevs_list": [ 01:21:29.205 { 01:21:29.205 "name": "BaseBdev1", 01:21:29.205 "uuid": "ecdae343-7722-5f38-a830-f1c84724ec4c", 01:21:29.205 "is_configured": true, 01:21:29.205 "data_offset": 2048, 01:21:29.205 "data_size": 63488 01:21:29.205 }, 01:21:29.205 { 01:21:29.205 "name": "BaseBdev2", 01:21:29.205 "uuid": "cc4d1a90-fb6c-5eb2-add5-b0b3ac4aa52f", 01:21:29.205 "is_configured": true, 01:21:29.205 "data_offset": 2048, 01:21:29.205 "data_size": 63488 01:21:29.205 } 01:21:29.205 ] 01:21:29.205 }' 01:21:29.205 05:16:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:21:29.205 05:16:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 01:21:29.771 05:16:21 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 01:21:29.771 05:16:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:29.771 05:16:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 01:21:29.771 [2024-12-09 05:16:21.250048] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 01:21:29.771 [2024-12-09 05:16:21.250465] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 01:21:29.771 [2024-12-09 05:16:21.253592] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 01:21:29.772 [2024-12-09 05:16:21.253684] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:21:29.772 [2024-12-09 05:16:21.253887] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 01:21:29.772 [2024-12-09 05:16:21.254147] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 01:21:29.772 { 01:21:29.772 "results": [ 01:21:29.772 { 01:21:29.772 "job": "raid_bdev1", 01:21:29.772 "core_mask": "0x1", 01:21:29.772 "workload": "randrw", 01:21:29.772 "percentage": 50, 01:21:29.772 "status": "finished", 01:21:29.772 "queue_depth": 1, 01:21:29.772 "io_size": 131072, 01:21:29.772 "runtime": 1.412841, 01:21:29.772 "iops": 12139.37024760748, 01:21:29.772 "mibps": 1517.421280950935, 01:21:29.772 "io_failed": 1, 01:21:29.772 "io_timeout": 0, 01:21:29.772 "avg_latency_us": 115.44859565807326, 01:21:29.772 "min_latency_us": 35.14181818181818, 01:21:29.772 "max_latency_us": 1474.56 01:21:29.772 } 01:21:29.772 ], 01:21:29.772 "core_count": 1 01:21:29.772 } 01:21:29.772 05:16:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:29.772 05:16:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61364 01:21:29.772 05:16:21 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@954 -- # '[' -z 61364 ']' 01:21:29.772 05:16:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 61364 01:21:29.772 05:16:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 01:21:29.772 05:16:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:21:29.772 05:16:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61364 01:21:29.772 05:16:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:21:29.772 05:16:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:21:29.772 05:16:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61364' 01:21:29.772 killing process with pid 61364 01:21:29.772 05:16:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 61364 01:21:29.772 [2024-12-09 05:16:21.295618] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 01:21:29.772 05:16:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 61364 01:21:30.030 [2024-12-09 05:16:21.393286] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 01:21:30.966 05:16:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.OWsB8S0Njg 01:21:30.966 05:16:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 01:21:30.966 05:16:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 01:21:30.966 ************************************ 01:21:30.966 END TEST raid_write_error_test 01:21:30.966 ************************************ 01:21:30.966 05:16:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 01:21:30.966 05:16:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 01:21:30.966 
05:16:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 01:21:30.966 05:16:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 01:21:30.966 05:16:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 01:21:30.966 01:21:30.966 real 0m4.560s 01:21:30.966 user 0m5.659s 01:21:30.966 sys 0m0.664s 01:21:30.966 05:16:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 01:21:30.966 05:16:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 01:21:30.966 05:16:22 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 01:21:30.966 05:16:22 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 01:21:30.966 05:16:22 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 01:21:30.966 05:16:22 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 01:21:30.966 05:16:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 01:21:30.966 ************************************ 01:21:30.966 START TEST raid_state_function_test 01:21:30.966 ************************************ 01:21:30.966 05:16:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 false 01:21:30.966 05:16:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 01:21:30.966 05:16:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 01:21:30.966 05:16:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 01:21:30.966 05:16:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 01:21:30.966 05:16:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 01:21:30.966 05:16:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 
01:21:30.966 05:16:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 01:21:30.966 05:16:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 01:21:30.966 05:16:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 01:21:30.966 05:16:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 01:21:30.966 05:16:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 01:21:30.966 05:16:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 01:21:30.966 05:16:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 01:21:30.967 05:16:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 01:21:30.967 05:16:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 01:21:30.967 05:16:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 01:21:30.967 05:16:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 01:21:30.967 05:16:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 01:21:30.967 05:16:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 01:21:30.967 05:16:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 01:21:30.967 05:16:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 01:21:30.967 05:16:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 01:21:30.967 05:16:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 01:21:30.967 05:16:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=61506 01:21:30.967 Process raid pid: 61506 
01:21:30.967 05:16:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61506' 01:21:30.967 05:16:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 61506 01:21:30.967 05:16:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 01:21:30.967 05:16:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 61506 ']' 01:21:30.967 05:16:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:21:30.967 05:16:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 01:21:30.967 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:21:30.967 05:16:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:21:30.967 05:16:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 01:21:30.967 05:16:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:21:31.226 [2024-12-09 05:16:22.640167] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
01:21:31.226 [2024-12-09 05:16:22.640383] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:21:31.226 [2024-12-09 05:16:22.824722] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:21:31.486 [2024-12-09 05:16:22.951030] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:21:31.753 [2024-12-09 05:16:23.226582] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 01:21:31.753 [2024-12-09 05:16:23.226648] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 01:21:32.020 05:16:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:21:32.020 05:16:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 01:21:32.020 05:16:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 01:21:32.020 05:16:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:32.020 05:16:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:21:32.020 [2024-12-09 05:16:23.605486] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 01:21:32.020 [2024-12-09 05:16:23.605600] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 01:21:32.020 [2024-12-09 05:16:23.605631] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 01:21:32.020 [2024-12-09 05:16:23.605648] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 01:21:32.020 05:16:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:32.020 05:16:23 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 01:21:32.020 05:16:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:21:32.020 05:16:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:21:32.020 05:16:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 01:21:32.020 05:16:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:21:32.020 05:16:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 01:21:32.020 05:16:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:21:32.020 05:16:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:21:32.020 05:16:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:21:32.020 05:16:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:21:32.020 05:16:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:21:32.020 05:16:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:21:32.020 05:16:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:32.020 05:16:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:21:32.020 05:16:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:32.279 05:16:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:21:32.279 "name": "Existed_Raid", 01:21:32.279 "uuid": "00000000-0000-0000-0000-000000000000", 01:21:32.279 "strip_size_kb": 64, 01:21:32.279 "state": "configuring", 01:21:32.279 
"raid_level": "concat", 01:21:32.279 "superblock": false, 01:21:32.279 "num_base_bdevs": 2, 01:21:32.279 "num_base_bdevs_discovered": 0, 01:21:32.279 "num_base_bdevs_operational": 2, 01:21:32.279 "base_bdevs_list": [ 01:21:32.279 { 01:21:32.279 "name": "BaseBdev1", 01:21:32.279 "uuid": "00000000-0000-0000-0000-000000000000", 01:21:32.279 "is_configured": false, 01:21:32.279 "data_offset": 0, 01:21:32.279 "data_size": 0 01:21:32.279 }, 01:21:32.279 { 01:21:32.279 "name": "BaseBdev2", 01:21:32.279 "uuid": "00000000-0000-0000-0000-000000000000", 01:21:32.279 "is_configured": false, 01:21:32.279 "data_offset": 0, 01:21:32.279 "data_size": 0 01:21:32.279 } 01:21:32.279 ] 01:21:32.279 }' 01:21:32.279 05:16:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:21:32.279 05:16:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:21:32.538 05:16:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 01:21:32.538 05:16:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:32.538 05:16:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:21:32.538 [2024-12-09 05:16:24.121602] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 01:21:32.538 [2024-12-09 05:16:24.121649] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 01:21:32.538 05:16:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:32.538 05:16:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 01:21:32.538 05:16:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:32.538 05:16:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 01:21:32.538 [2024-12-09 05:16:24.129564] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 01:21:32.538 [2024-12-09 05:16:24.129622] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 01:21:32.538 [2024-12-09 05:16:24.129639] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 01:21:32.538 [2024-12-09 05:16:24.129658] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 01:21:32.538 05:16:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:32.538 05:16:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 01:21:32.538 05:16:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:32.538 05:16:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:21:32.797 [2024-12-09 05:16:24.178735] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 01:21:32.797 BaseBdev1 01:21:32.797 05:16:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:32.797 05:16:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 01:21:32.797 05:16:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 01:21:32.797 05:16:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 01:21:32.797 05:16:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 01:21:32.798 05:16:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 01:21:32.798 05:16:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 01:21:32.798 05:16:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 01:21:32.798 05:16:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:32.798 05:16:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:21:32.798 05:16:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:32.798 05:16:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 01:21:32.798 05:16:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:32.798 05:16:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:21:32.798 [ 01:21:32.798 { 01:21:32.798 "name": "BaseBdev1", 01:21:32.798 "aliases": [ 01:21:32.798 "f8e8e608-ee16-4fe0-8d4d-3e830b40bb97" 01:21:32.798 ], 01:21:32.798 "product_name": "Malloc disk", 01:21:32.798 "block_size": 512, 01:21:32.798 "num_blocks": 65536, 01:21:32.798 "uuid": "f8e8e608-ee16-4fe0-8d4d-3e830b40bb97", 01:21:32.798 "assigned_rate_limits": { 01:21:32.798 "rw_ios_per_sec": 0, 01:21:32.798 "rw_mbytes_per_sec": 0, 01:21:32.798 "r_mbytes_per_sec": 0, 01:21:32.798 "w_mbytes_per_sec": 0 01:21:32.798 }, 01:21:32.798 "claimed": true, 01:21:32.798 "claim_type": "exclusive_write", 01:21:32.798 "zoned": false, 01:21:32.798 "supported_io_types": { 01:21:32.798 "read": true, 01:21:32.798 "write": true, 01:21:32.798 "unmap": true, 01:21:32.798 "flush": true, 01:21:32.798 "reset": true, 01:21:32.798 "nvme_admin": false, 01:21:32.798 "nvme_io": false, 01:21:32.798 "nvme_io_md": false, 01:21:32.798 "write_zeroes": true, 01:21:32.798 "zcopy": true, 01:21:32.798 "get_zone_info": false, 01:21:32.798 "zone_management": false, 01:21:32.798 "zone_append": false, 01:21:32.798 "compare": false, 01:21:32.798 "compare_and_write": false, 01:21:32.798 "abort": true, 01:21:32.798 "seek_hole": false, 01:21:32.798 "seek_data": false, 01:21:32.798 "copy": true, 01:21:32.798 "nvme_iov_md": 
false 01:21:32.798 }, 01:21:32.798 "memory_domains": [ 01:21:32.798 { 01:21:32.798 "dma_device_id": "system", 01:21:32.798 "dma_device_type": 1 01:21:32.798 }, 01:21:32.798 { 01:21:32.798 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:21:32.798 "dma_device_type": 2 01:21:32.798 } 01:21:32.798 ], 01:21:32.798 "driver_specific": {} 01:21:32.798 } 01:21:32.798 ] 01:21:32.798 05:16:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:32.798 05:16:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 01:21:32.798 05:16:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 01:21:32.798 05:16:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:21:32.798 05:16:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:21:32.798 05:16:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 01:21:32.798 05:16:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:21:32.798 05:16:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 01:21:32.798 05:16:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:21:32.798 05:16:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:21:32.798 05:16:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:21:32.798 05:16:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:21:32.798 05:16:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:21:32.798 05:16:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:21:32.798 
05:16:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:32.798 05:16:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:21:32.798 05:16:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:32.798 05:16:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:21:32.798 "name": "Existed_Raid", 01:21:32.798 "uuid": "00000000-0000-0000-0000-000000000000", 01:21:32.798 "strip_size_kb": 64, 01:21:32.798 "state": "configuring", 01:21:32.798 "raid_level": "concat", 01:21:32.798 "superblock": false, 01:21:32.798 "num_base_bdevs": 2, 01:21:32.798 "num_base_bdevs_discovered": 1, 01:21:32.798 "num_base_bdevs_operational": 2, 01:21:32.798 "base_bdevs_list": [ 01:21:32.798 { 01:21:32.798 "name": "BaseBdev1", 01:21:32.798 "uuid": "f8e8e608-ee16-4fe0-8d4d-3e830b40bb97", 01:21:32.798 "is_configured": true, 01:21:32.798 "data_offset": 0, 01:21:32.798 "data_size": 65536 01:21:32.798 }, 01:21:32.798 { 01:21:32.798 "name": "BaseBdev2", 01:21:32.798 "uuid": "00000000-0000-0000-0000-000000000000", 01:21:32.798 "is_configured": false, 01:21:32.798 "data_offset": 0, 01:21:32.798 "data_size": 0 01:21:32.798 } 01:21:32.798 ] 01:21:32.798 }' 01:21:32.798 05:16:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:21:32.798 05:16:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:21:33.365 05:16:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 01:21:33.365 05:16:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:33.365 05:16:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:21:33.365 [2024-12-09 05:16:24.743072] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 01:21:33.365 [2024-12-09 05:16:24.743180] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 01:21:33.365 05:16:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:33.365 05:16:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 01:21:33.365 05:16:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:33.365 05:16:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:21:33.365 [2024-12-09 05:16:24.751088] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 01:21:33.365 [2024-12-09 05:16:24.754250] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 01:21:33.365 [2024-12-09 05:16:24.754331] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 01:21:33.365 05:16:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:33.365 05:16:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 01:21:33.365 05:16:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 01:21:33.365 05:16:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 01:21:33.365 05:16:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:21:33.365 05:16:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:21:33.365 05:16:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 01:21:33.365 05:16:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:21:33.365 05:16:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 01:21:33.365 05:16:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:21:33.365 05:16:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:21:33.365 05:16:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:21:33.365 05:16:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:21:33.365 05:16:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:21:33.365 05:16:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:33.365 05:16:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:21:33.365 05:16:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:21:33.365 05:16:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:33.365 05:16:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:21:33.365 "name": "Existed_Raid", 01:21:33.365 "uuid": "00000000-0000-0000-0000-000000000000", 01:21:33.365 "strip_size_kb": 64, 01:21:33.365 "state": "configuring", 01:21:33.365 "raid_level": "concat", 01:21:33.365 "superblock": false, 01:21:33.365 "num_base_bdevs": 2, 01:21:33.365 "num_base_bdevs_discovered": 1, 01:21:33.365 "num_base_bdevs_operational": 2, 01:21:33.365 "base_bdevs_list": [ 01:21:33.365 { 01:21:33.365 "name": "BaseBdev1", 01:21:33.365 "uuid": "f8e8e608-ee16-4fe0-8d4d-3e830b40bb97", 01:21:33.365 "is_configured": true, 01:21:33.365 "data_offset": 0, 01:21:33.365 "data_size": 65536 01:21:33.365 }, 01:21:33.365 { 01:21:33.365 "name": "BaseBdev2", 01:21:33.365 "uuid": "00000000-0000-0000-0000-000000000000", 01:21:33.365 "is_configured": false, 01:21:33.365 "data_offset": 0, 01:21:33.365 "data_size": 0 01:21:33.365 } 
01:21:33.365 ] 01:21:33.365 }' 01:21:33.365 05:16:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:21:33.365 05:16:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:21:33.935 05:16:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 01:21:33.935 05:16:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:33.935 05:16:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:21:33.935 [2024-12-09 05:16:25.342633] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 01:21:33.935 [2024-12-09 05:16:25.342693] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 01:21:33.935 [2024-12-09 05:16:25.342706] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 01:21:33.935 [2024-12-09 05:16:25.343094] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 01:21:33.935 [2024-12-09 05:16:25.343353] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 01:21:33.935 [2024-12-09 05:16:25.343387] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 01:21:33.935 [2024-12-09 05:16:25.343705] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:21:33.935 BaseBdev2 01:21:33.935 05:16:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:33.935 05:16:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 01:21:33.935 05:16:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 01:21:33.935 05:16:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 01:21:33.935 05:16:25 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 01:21:33.935 05:16:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 01:21:33.935 05:16:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 01:21:33.935 05:16:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 01:21:33.935 05:16:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:33.935 05:16:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:21:33.935 05:16:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:33.935 05:16:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 01:21:33.935 05:16:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:33.935 05:16:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:21:33.935 [ 01:21:33.935 { 01:21:33.935 "name": "BaseBdev2", 01:21:33.935 "aliases": [ 01:21:33.935 "aadaba37-d698-4363-aa2f-79f9295c5f43" 01:21:33.935 ], 01:21:33.935 "product_name": "Malloc disk", 01:21:33.935 "block_size": 512, 01:21:33.935 "num_blocks": 65536, 01:21:33.935 "uuid": "aadaba37-d698-4363-aa2f-79f9295c5f43", 01:21:33.935 "assigned_rate_limits": { 01:21:33.935 "rw_ios_per_sec": 0, 01:21:33.935 "rw_mbytes_per_sec": 0, 01:21:33.935 "r_mbytes_per_sec": 0, 01:21:33.935 "w_mbytes_per_sec": 0 01:21:33.935 }, 01:21:33.935 "claimed": true, 01:21:33.935 "claim_type": "exclusive_write", 01:21:33.935 "zoned": false, 01:21:33.935 "supported_io_types": { 01:21:33.935 "read": true, 01:21:33.935 "write": true, 01:21:33.935 "unmap": true, 01:21:33.935 "flush": true, 01:21:33.935 "reset": true, 01:21:33.935 "nvme_admin": false, 01:21:33.936 "nvme_io": false, 01:21:33.936 "nvme_io_md": 
false, 01:21:33.936 "write_zeroes": true, 01:21:33.936 "zcopy": true, 01:21:33.936 "get_zone_info": false, 01:21:33.936 "zone_management": false, 01:21:33.936 "zone_append": false, 01:21:33.936 "compare": false, 01:21:33.936 "compare_and_write": false, 01:21:33.936 "abort": true, 01:21:33.936 "seek_hole": false, 01:21:33.936 "seek_data": false, 01:21:33.936 "copy": true, 01:21:33.936 "nvme_iov_md": false 01:21:33.936 }, 01:21:33.936 "memory_domains": [ 01:21:33.936 { 01:21:33.936 "dma_device_id": "system", 01:21:33.936 "dma_device_type": 1 01:21:33.936 }, 01:21:33.936 { 01:21:33.936 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:21:33.936 "dma_device_type": 2 01:21:33.936 } 01:21:33.936 ], 01:21:33.936 "driver_specific": {} 01:21:33.936 } 01:21:33.936 ] 01:21:33.936 05:16:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:33.936 05:16:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 01:21:33.936 05:16:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 01:21:33.936 05:16:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 01:21:33.936 05:16:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 01:21:33.936 05:16:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:21:33.936 05:16:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:21:33.936 05:16:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 01:21:33.936 05:16:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:21:33.936 05:16:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 01:21:33.936 05:16:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 01:21:33.936 05:16:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:21:33.936 05:16:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:21:33.936 05:16:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:21:33.936 05:16:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:21:33.936 05:16:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:21:33.936 05:16:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:33.936 05:16:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:21:33.936 05:16:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:33.936 05:16:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:21:33.936 "name": "Existed_Raid", 01:21:33.936 "uuid": "099e8cb9-c43f-46ff-8322-69b29cae86a4", 01:21:33.936 "strip_size_kb": 64, 01:21:33.936 "state": "online", 01:21:33.936 "raid_level": "concat", 01:21:33.936 "superblock": false, 01:21:33.936 "num_base_bdevs": 2, 01:21:33.936 "num_base_bdevs_discovered": 2, 01:21:33.936 "num_base_bdevs_operational": 2, 01:21:33.936 "base_bdevs_list": [ 01:21:33.936 { 01:21:33.936 "name": "BaseBdev1", 01:21:33.936 "uuid": "f8e8e608-ee16-4fe0-8d4d-3e830b40bb97", 01:21:33.936 "is_configured": true, 01:21:33.936 "data_offset": 0, 01:21:33.936 "data_size": 65536 01:21:33.936 }, 01:21:33.936 { 01:21:33.936 "name": "BaseBdev2", 01:21:33.936 "uuid": "aadaba37-d698-4363-aa2f-79f9295c5f43", 01:21:33.936 "is_configured": true, 01:21:33.936 "data_offset": 0, 01:21:33.936 "data_size": 65536 01:21:33.936 } 01:21:33.936 ] 01:21:33.936 }' 01:21:33.936 05:16:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
01:21:33.936 05:16:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:21:34.502 05:16:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 01:21:34.502 05:16:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 01:21:34.502 05:16:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 01:21:34.502 05:16:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 01:21:34.502 05:16:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 01:21:34.502 05:16:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 01:21:34.502 05:16:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 01:21:34.502 05:16:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:34.502 05:16:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:21:34.502 05:16:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 01:21:34.502 [2024-12-09 05:16:25.907412] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 01:21:34.502 05:16:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:34.502 05:16:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 01:21:34.502 "name": "Existed_Raid", 01:21:34.502 "aliases": [ 01:21:34.502 "099e8cb9-c43f-46ff-8322-69b29cae86a4" 01:21:34.502 ], 01:21:34.502 "product_name": "Raid Volume", 01:21:34.502 "block_size": 512, 01:21:34.502 "num_blocks": 131072, 01:21:34.502 "uuid": "099e8cb9-c43f-46ff-8322-69b29cae86a4", 01:21:34.502 "assigned_rate_limits": { 01:21:34.502 "rw_ios_per_sec": 0, 01:21:34.502 "rw_mbytes_per_sec": 0, 01:21:34.502 "r_mbytes_per_sec": 
0, 01:21:34.502 "w_mbytes_per_sec": 0 01:21:34.502 }, 01:21:34.502 "claimed": false, 01:21:34.502 "zoned": false, 01:21:34.502 "supported_io_types": { 01:21:34.502 "read": true, 01:21:34.502 "write": true, 01:21:34.502 "unmap": true, 01:21:34.502 "flush": true, 01:21:34.502 "reset": true, 01:21:34.502 "nvme_admin": false, 01:21:34.502 "nvme_io": false, 01:21:34.502 "nvme_io_md": false, 01:21:34.502 "write_zeroes": true, 01:21:34.502 "zcopy": false, 01:21:34.502 "get_zone_info": false, 01:21:34.502 "zone_management": false, 01:21:34.502 "zone_append": false, 01:21:34.502 "compare": false, 01:21:34.502 "compare_and_write": false, 01:21:34.502 "abort": false, 01:21:34.502 "seek_hole": false, 01:21:34.502 "seek_data": false, 01:21:34.502 "copy": false, 01:21:34.502 "nvme_iov_md": false 01:21:34.502 }, 01:21:34.502 "memory_domains": [ 01:21:34.502 { 01:21:34.502 "dma_device_id": "system", 01:21:34.502 "dma_device_type": 1 01:21:34.502 }, 01:21:34.502 { 01:21:34.502 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:21:34.502 "dma_device_type": 2 01:21:34.502 }, 01:21:34.502 { 01:21:34.502 "dma_device_id": "system", 01:21:34.502 "dma_device_type": 1 01:21:34.502 }, 01:21:34.502 { 01:21:34.502 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:21:34.502 "dma_device_type": 2 01:21:34.502 } 01:21:34.502 ], 01:21:34.502 "driver_specific": { 01:21:34.502 "raid": { 01:21:34.502 "uuid": "099e8cb9-c43f-46ff-8322-69b29cae86a4", 01:21:34.502 "strip_size_kb": 64, 01:21:34.502 "state": "online", 01:21:34.502 "raid_level": "concat", 01:21:34.502 "superblock": false, 01:21:34.502 "num_base_bdevs": 2, 01:21:34.502 "num_base_bdevs_discovered": 2, 01:21:34.502 "num_base_bdevs_operational": 2, 01:21:34.502 "base_bdevs_list": [ 01:21:34.502 { 01:21:34.502 "name": "BaseBdev1", 01:21:34.502 "uuid": "f8e8e608-ee16-4fe0-8d4d-3e830b40bb97", 01:21:34.502 "is_configured": true, 01:21:34.502 "data_offset": 0, 01:21:34.502 "data_size": 65536 01:21:34.502 }, 01:21:34.502 { 01:21:34.503 "name": "BaseBdev2", 
01:21:34.503 "uuid": "aadaba37-d698-4363-aa2f-79f9295c5f43", 01:21:34.503 "is_configured": true, 01:21:34.503 "data_offset": 0, 01:21:34.503 "data_size": 65536 01:21:34.503 } 01:21:34.503 ] 01:21:34.503 } 01:21:34.503 } 01:21:34.503 }' 01:21:34.503 05:16:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 01:21:34.503 05:16:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 01:21:34.503 BaseBdev2' 01:21:34.503 05:16:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:21:34.503 05:16:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 01:21:34.503 05:16:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:21:34.503 05:16:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 01:21:34.503 05:16:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:34.503 05:16:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:21:34.503 05:16:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:21:34.503 05:16:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:34.761 05:16:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:21:34.761 05:16:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:21:34.761 05:16:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:21:34.761 05:16:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 01:21:34.761 05:16:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 01:21:34.761 05:16:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:34.761 05:16:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:21:34.761 05:16:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:34.761 05:16:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:21:34.761 05:16:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:21:34.761 05:16:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 01:21:34.761 05:16:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:34.761 05:16:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:21:34.761 [2024-12-09 05:16:26.182933] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 01:21:34.761 [2024-12-09 05:16:26.182968] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 01:21:34.761 [2024-12-09 05:16:26.183034] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 01:21:34.761 05:16:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:34.761 05:16:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 01:21:34.761 05:16:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 01:21:34.761 05:16:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 01:21:34.761 05:16:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 01:21:34.761 05:16:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 01:21:34.761 05:16:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 01:21:34.761 05:16:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:21:34.761 05:16:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 01:21:34.761 05:16:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 01:21:34.761 05:16:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:21:34.761 05:16:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 01:21:34.761 05:16:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:21:34.761 05:16:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:21:34.761 05:16:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:21:34.761 05:16:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:21:34.761 05:16:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:21:34.761 05:16:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:34.761 05:16:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:21:34.761 05:16:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:21:34.761 05:16:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:34.761 05:16:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:21:34.761 "name": "Existed_Raid", 01:21:34.761 "uuid": "099e8cb9-c43f-46ff-8322-69b29cae86a4", 01:21:34.761 "strip_size_kb": 64, 01:21:34.761 
"state": "offline", 01:21:34.761 "raid_level": "concat", 01:21:34.761 "superblock": false, 01:21:34.761 "num_base_bdevs": 2, 01:21:34.762 "num_base_bdevs_discovered": 1, 01:21:34.762 "num_base_bdevs_operational": 1, 01:21:34.762 "base_bdevs_list": [ 01:21:34.762 { 01:21:34.762 "name": null, 01:21:34.762 "uuid": "00000000-0000-0000-0000-000000000000", 01:21:34.762 "is_configured": false, 01:21:34.762 "data_offset": 0, 01:21:34.762 "data_size": 65536 01:21:34.762 }, 01:21:34.762 { 01:21:34.762 "name": "BaseBdev2", 01:21:34.762 "uuid": "aadaba37-d698-4363-aa2f-79f9295c5f43", 01:21:34.762 "is_configured": true, 01:21:34.762 "data_offset": 0, 01:21:34.762 "data_size": 65536 01:21:34.762 } 01:21:34.762 ] 01:21:34.762 }' 01:21:34.762 05:16:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:21:34.762 05:16:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:21:35.330 05:16:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 01:21:35.330 05:16:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 01:21:35.330 05:16:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 01:21:35.330 05:16:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 01:21:35.330 05:16:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:35.330 05:16:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:21:35.330 05:16:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:35.330 05:16:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 01:21:35.330 05:16:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 01:21:35.330 05:16:26 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 01:21:35.330 05:16:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:35.330 05:16:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:21:35.330 [2024-12-09 05:16:26.819702] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 01:21:35.330 [2024-12-09 05:16:26.820044] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 01:21:35.330 05:16:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:35.330 05:16:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 01:21:35.330 05:16:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 01:21:35.330 05:16:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 01:21:35.330 05:16:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 01:21:35.330 05:16:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:35.330 05:16:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:21:35.330 05:16:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:35.589 05:16:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 01:21:35.589 05:16:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 01:21:35.589 05:16:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 01:21:35.589 05:16:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 61506 01:21:35.589 05:16:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 61506 ']' 01:21:35.589 05:16:26 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@958 -- # kill -0 61506 01:21:35.589 05:16:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 01:21:35.589 05:16:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:21:35.589 05:16:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61506 01:21:35.589 killing process with pid 61506 01:21:35.589 05:16:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:21:35.589 05:16:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:21:35.589 05:16:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61506' 01:21:35.589 05:16:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 61506 01:21:35.589 [2024-12-09 05:16:26.982262] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 01:21:35.589 05:16:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 61506 01:21:35.589 [2024-12-09 05:16:26.995486] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 01:21:36.525 05:16:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 01:21:36.525 01:21:36.525 real 0m5.572s 01:21:36.525 user 0m8.379s 01:21:36.525 sys 0m0.838s 01:21:36.525 05:16:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 01:21:36.525 05:16:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:21:36.525 ************************************ 01:21:36.525 END TEST raid_state_function_test 01:21:36.525 ************************************ 01:21:36.525 05:16:28 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 01:21:36.525 05:16:28 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 
']' 01:21:36.525 05:16:28 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 01:21:36.525 05:16:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 01:21:36.783 ************************************ 01:21:36.783 START TEST raid_state_function_test_sb 01:21:36.783 ************************************ 01:21:36.783 05:16:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 true 01:21:36.783 05:16:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 01:21:36.783 05:16:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 01:21:36.783 05:16:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 01:21:36.783 05:16:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 01:21:36.783 05:16:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 01:21:36.783 05:16:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 01:21:36.783 05:16:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 01:21:36.783 05:16:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 01:21:36.783 05:16:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 01:21:36.783 05:16:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 01:21:36.783 05:16:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 01:21:36.783 05:16:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 01:21:36.783 05:16:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 01:21:36.783 05:16:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
01:21:36.783 05:16:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 01:21:36.783 05:16:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 01:21:36.783 05:16:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 01:21:36.783 05:16:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 01:21:36.783 05:16:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 01:21:36.783 Process raid pid: 61766 01:21:36.783 05:16:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 01:21:36.783 05:16:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 01:21:36.783 05:16:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 01:21:36.783 05:16:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 01:21:36.783 05:16:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=61766 01:21:36.783 05:16:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61766' 01:21:36.783 05:16:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 01:21:36.783 05:16:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 61766 01:21:36.783 05:16:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 61766 ']' 01:21:36.783 05:16:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:21:36.783 05:16:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 01:21:36.783 05:16:28 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:21:36.783 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:21:36.783 05:16:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 01:21:36.783 05:16:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:21:36.783 [2024-12-09 05:16:28.236264] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:21:36.783 [2024-12-09 05:16:28.236622] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:21:37.041 [2024-12-09 05:16:28.409176] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:21:37.041 [2024-12-09 05:16:28.540551] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:21:37.299 [2024-12-09 05:16:28.756922] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 01:21:37.299 [2024-12-09 05:16:28.757243] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 01:21:37.863 05:16:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:21:37.863 05:16:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 01:21:37.863 05:16:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 01:21:37.863 05:16:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:37.863 05:16:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:21:37.863 [2024-12-09 05:16:29.321647] bdev.c:8674:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev1 01:21:37.863 [2024-12-09 05:16:29.321769] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 01:21:37.863 [2024-12-09 05:16:29.321788] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 01:21:37.863 [2024-12-09 05:16:29.321805] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 01:21:37.863 05:16:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:37.863 05:16:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 01:21:37.863 05:16:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:21:37.863 05:16:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:21:37.863 05:16:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 01:21:37.863 05:16:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:21:37.863 05:16:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 01:21:37.863 05:16:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:21:37.863 05:16:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:21:37.863 05:16:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:21:37.863 05:16:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 01:21:37.863 05:16:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:21:37.863 05:16:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:37.863 
05:16:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:21:37.863 05:16:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:21:37.863 05:16:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:37.863 05:16:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:21:37.863 "name": "Existed_Raid", 01:21:37.863 "uuid": "a70c3fb4-130d-4e2f-b1ae-bb6b1eaa32e5", 01:21:37.863 "strip_size_kb": 64, 01:21:37.863 "state": "configuring", 01:21:37.863 "raid_level": "concat", 01:21:37.863 "superblock": true, 01:21:37.863 "num_base_bdevs": 2, 01:21:37.863 "num_base_bdevs_discovered": 0, 01:21:37.864 "num_base_bdevs_operational": 2, 01:21:37.864 "base_bdevs_list": [ 01:21:37.864 { 01:21:37.864 "name": "BaseBdev1", 01:21:37.864 "uuid": "00000000-0000-0000-0000-000000000000", 01:21:37.864 "is_configured": false, 01:21:37.864 "data_offset": 0, 01:21:37.864 "data_size": 0 01:21:37.864 }, 01:21:37.864 { 01:21:37.864 "name": "BaseBdev2", 01:21:37.864 "uuid": "00000000-0000-0000-0000-000000000000", 01:21:37.864 "is_configured": false, 01:21:37.864 "data_offset": 0, 01:21:37.864 "data_size": 0 01:21:37.864 } 01:21:37.864 ] 01:21:37.864 }' 01:21:37.864 05:16:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:21:37.864 05:16:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:21:38.430 05:16:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 01:21:38.430 05:16:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:38.430 05:16:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:21:38.430 [2024-12-09 05:16:29.861722] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 
01:21:38.430 [2024-12-09 05:16:29.861778] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 01:21:38.430 05:16:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:38.430 05:16:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 01:21:38.430 05:16:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:38.430 05:16:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:21:38.430 [2024-12-09 05:16:29.869725] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 01:21:38.430 [2024-12-09 05:16:29.869773] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 01:21:38.430 [2024-12-09 05:16:29.869789] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 01:21:38.430 [2024-12-09 05:16:29.869823] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 01:21:38.430 05:16:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:38.430 05:16:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 01:21:38.430 05:16:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:38.430 05:16:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:21:38.430 [2024-12-09 05:16:29.918913] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 01:21:38.430 BaseBdev1 01:21:38.430 05:16:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:38.430 05:16:29 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 01:21:38.430 05:16:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 01:21:38.430 05:16:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 01:21:38.430 05:16:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 01:21:38.430 05:16:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 01:21:38.430 05:16:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 01:21:38.430 05:16:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 01:21:38.430 05:16:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:38.430 05:16:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:21:38.430 05:16:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:38.430 05:16:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 01:21:38.430 05:16:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:38.430 05:16:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:21:38.430 [ 01:21:38.430 { 01:21:38.430 "name": "BaseBdev1", 01:21:38.430 "aliases": [ 01:21:38.430 "369aac52-9003-4049-8dea-36028cec3d82" 01:21:38.430 ], 01:21:38.430 "product_name": "Malloc disk", 01:21:38.430 "block_size": 512, 01:21:38.430 "num_blocks": 65536, 01:21:38.430 "uuid": "369aac52-9003-4049-8dea-36028cec3d82", 01:21:38.430 "assigned_rate_limits": { 01:21:38.430 "rw_ios_per_sec": 0, 01:21:38.430 "rw_mbytes_per_sec": 0, 01:21:38.430 "r_mbytes_per_sec": 0, 01:21:38.430 "w_mbytes_per_sec": 0 01:21:38.430 }, 01:21:38.430 "claimed": true, 
01:21:38.430 "claim_type": "exclusive_write", 01:21:38.430 "zoned": false, 01:21:38.430 "supported_io_types": { 01:21:38.430 "read": true, 01:21:38.430 "write": true, 01:21:38.430 "unmap": true, 01:21:38.430 "flush": true, 01:21:38.430 "reset": true, 01:21:38.430 "nvme_admin": false, 01:21:38.430 "nvme_io": false, 01:21:38.430 "nvme_io_md": false, 01:21:38.430 "write_zeroes": true, 01:21:38.430 "zcopy": true, 01:21:38.430 "get_zone_info": false, 01:21:38.430 "zone_management": false, 01:21:38.430 "zone_append": false, 01:21:38.430 "compare": false, 01:21:38.430 "compare_and_write": false, 01:21:38.430 "abort": true, 01:21:38.430 "seek_hole": false, 01:21:38.430 "seek_data": false, 01:21:38.430 "copy": true, 01:21:38.430 "nvme_iov_md": false 01:21:38.430 }, 01:21:38.430 "memory_domains": [ 01:21:38.430 { 01:21:38.430 "dma_device_id": "system", 01:21:38.430 "dma_device_type": 1 01:21:38.430 }, 01:21:38.430 { 01:21:38.430 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:21:38.430 "dma_device_type": 2 01:21:38.430 } 01:21:38.430 ], 01:21:38.430 "driver_specific": {} 01:21:38.430 } 01:21:38.430 ] 01:21:38.430 05:16:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:38.430 05:16:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 01:21:38.430 05:16:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 01:21:38.430 05:16:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:21:38.430 05:16:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:21:38.430 05:16:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 01:21:38.431 05:16:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:21:38.431 05:16:29 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 01:21:38.431 05:16:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:21:38.431 05:16:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:21:38.431 05:16:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:21:38.431 05:16:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 01:21:38.431 05:16:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:21:38.431 05:16:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:21:38.431 05:16:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:38.431 05:16:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:21:38.431 05:16:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:38.431 05:16:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:21:38.431 "name": "Existed_Raid", 01:21:38.431 "uuid": "aa9662ea-d6be-4e57-bf25-4b18d7820f64", 01:21:38.431 "strip_size_kb": 64, 01:21:38.431 "state": "configuring", 01:21:38.431 "raid_level": "concat", 01:21:38.431 "superblock": true, 01:21:38.431 "num_base_bdevs": 2, 01:21:38.431 "num_base_bdevs_discovered": 1, 01:21:38.431 "num_base_bdevs_operational": 2, 01:21:38.431 "base_bdevs_list": [ 01:21:38.431 { 01:21:38.431 "name": "BaseBdev1", 01:21:38.431 "uuid": "369aac52-9003-4049-8dea-36028cec3d82", 01:21:38.431 "is_configured": true, 01:21:38.431 "data_offset": 2048, 01:21:38.431 "data_size": 63488 01:21:38.431 }, 01:21:38.431 { 01:21:38.431 "name": "BaseBdev2", 01:21:38.431 "uuid": "00000000-0000-0000-0000-000000000000", 01:21:38.431 
"is_configured": false, 01:21:38.431 "data_offset": 0, 01:21:38.431 "data_size": 0 01:21:38.431 } 01:21:38.431 ] 01:21:38.431 }' 01:21:38.431 05:16:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:21:38.431 05:16:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:21:38.996 05:16:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 01:21:38.996 05:16:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:38.996 05:16:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:21:38.996 [2024-12-09 05:16:30.499143] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 01:21:38.996 [2024-12-09 05:16:30.499219] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 01:21:38.996 05:16:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:38.996 05:16:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 01:21:38.997 05:16:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:38.997 05:16:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:21:38.997 [2024-12-09 05:16:30.507128] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 01:21:38.997 [2024-12-09 05:16:30.509425] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 01:21:38.997 [2024-12-09 05:16:30.509476] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 01:21:38.997 05:16:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:38.997 05:16:30 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 01:21:38.997 05:16:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 01:21:38.997 05:16:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 01:21:38.997 05:16:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:21:38.997 05:16:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:21:38.997 05:16:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 01:21:38.997 05:16:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:21:38.997 05:16:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 01:21:38.997 05:16:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:21:38.997 05:16:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:21:38.997 05:16:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:21:38.997 05:16:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 01:21:38.997 05:16:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:21:38.997 05:16:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:38.997 05:16:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:21:38.997 05:16:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:21:38.997 05:16:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:38.997 05:16:30 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:21:38.997 "name": "Existed_Raid", 01:21:38.997 "uuid": "3dfe5977-0e0a-4420-80e4-fc598f8b37b8", 01:21:38.997 "strip_size_kb": 64, 01:21:38.997 "state": "configuring", 01:21:38.997 "raid_level": "concat", 01:21:38.997 "superblock": true, 01:21:38.997 "num_base_bdevs": 2, 01:21:38.997 "num_base_bdevs_discovered": 1, 01:21:38.997 "num_base_bdevs_operational": 2, 01:21:38.997 "base_bdevs_list": [ 01:21:38.997 { 01:21:38.997 "name": "BaseBdev1", 01:21:38.997 "uuid": "369aac52-9003-4049-8dea-36028cec3d82", 01:21:38.997 "is_configured": true, 01:21:38.997 "data_offset": 2048, 01:21:38.997 "data_size": 63488 01:21:38.997 }, 01:21:38.997 { 01:21:38.997 "name": "BaseBdev2", 01:21:38.997 "uuid": "00000000-0000-0000-0000-000000000000", 01:21:38.997 "is_configured": false, 01:21:38.997 "data_offset": 0, 01:21:38.997 "data_size": 0 01:21:38.997 } 01:21:38.997 ] 01:21:38.997 }' 01:21:38.997 05:16:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:21:38.997 05:16:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:21:39.562 05:16:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 01:21:39.562 05:16:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:39.562 05:16:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:21:39.562 [2024-12-09 05:16:31.085617] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 01:21:39.562 [2024-12-09 05:16:31.085936] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 01:21:39.562 [2024-12-09 05:16:31.085954] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 01:21:39.562 BaseBdev2 01:21:39.562 [2024-12-09 05:16:31.086273] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 01:21:39.562 [2024-12-09 05:16:31.086495] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 01:21:39.562 [2024-12-09 05:16:31.086559] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 01:21:39.563 [2024-12-09 05:16:31.086780] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:21:39.563 05:16:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:39.563 05:16:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 01:21:39.563 05:16:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 01:21:39.563 05:16:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 01:21:39.563 05:16:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 01:21:39.563 05:16:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 01:21:39.563 05:16:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 01:21:39.563 05:16:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 01:21:39.563 05:16:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:39.563 05:16:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:21:39.563 05:16:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:39.563 05:16:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 01:21:39.563 05:16:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:39.563 
05:16:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:21:39.563 [ 01:21:39.563 { 01:21:39.563 "name": "BaseBdev2", 01:21:39.563 "aliases": [ 01:21:39.563 "0a7d3fe2-0eda-4b60-b6d4-753e9e6275d3" 01:21:39.563 ], 01:21:39.563 "product_name": "Malloc disk", 01:21:39.563 "block_size": 512, 01:21:39.563 "num_blocks": 65536, 01:21:39.563 "uuid": "0a7d3fe2-0eda-4b60-b6d4-753e9e6275d3", 01:21:39.563 "assigned_rate_limits": { 01:21:39.563 "rw_ios_per_sec": 0, 01:21:39.563 "rw_mbytes_per_sec": 0, 01:21:39.563 "r_mbytes_per_sec": 0, 01:21:39.563 "w_mbytes_per_sec": 0 01:21:39.563 }, 01:21:39.563 "claimed": true, 01:21:39.563 "claim_type": "exclusive_write", 01:21:39.563 "zoned": false, 01:21:39.563 "supported_io_types": { 01:21:39.563 "read": true, 01:21:39.563 "write": true, 01:21:39.563 "unmap": true, 01:21:39.563 "flush": true, 01:21:39.563 "reset": true, 01:21:39.563 "nvme_admin": false, 01:21:39.563 "nvme_io": false, 01:21:39.563 "nvme_io_md": false, 01:21:39.563 "write_zeroes": true, 01:21:39.563 "zcopy": true, 01:21:39.563 "get_zone_info": false, 01:21:39.563 "zone_management": false, 01:21:39.563 "zone_append": false, 01:21:39.563 "compare": false, 01:21:39.563 "compare_and_write": false, 01:21:39.563 "abort": true, 01:21:39.563 "seek_hole": false, 01:21:39.563 "seek_data": false, 01:21:39.563 "copy": true, 01:21:39.563 "nvme_iov_md": false 01:21:39.563 }, 01:21:39.563 "memory_domains": [ 01:21:39.563 { 01:21:39.563 "dma_device_id": "system", 01:21:39.563 "dma_device_type": 1 01:21:39.563 }, 01:21:39.563 { 01:21:39.563 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:21:39.563 "dma_device_type": 2 01:21:39.563 } 01:21:39.563 ], 01:21:39.563 "driver_specific": {} 01:21:39.563 } 01:21:39.563 ] 01:21:39.563 05:16:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:39.563 05:16:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 01:21:39.563 05:16:31 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 01:21:39.563 05:16:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 01:21:39.563 05:16:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 01:21:39.563 05:16:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:21:39.563 05:16:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:21:39.563 05:16:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 01:21:39.563 05:16:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:21:39.563 05:16:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 01:21:39.563 05:16:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:21:39.563 05:16:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:21:39.563 05:16:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:21:39.563 05:16:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 01:21:39.563 05:16:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:21:39.563 05:16:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:39.563 05:16:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:21:39.563 05:16:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:21:39.563 05:16:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:39.563 05:16:31 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:21:39.563 "name": "Existed_Raid", 01:21:39.563 "uuid": "3dfe5977-0e0a-4420-80e4-fc598f8b37b8", 01:21:39.563 "strip_size_kb": 64, 01:21:39.563 "state": "online", 01:21:39.563 "raid_level": "concat", 01:21:39.563 "superblock": true, 01:21:39.563 "num_base_bdevs": 2, 01:21:39.563 "num_base_bdevs_discovered": 2, 01:21:39.563 "num_base_bdevs_operational": 2, 01:21:39.563 "base_bdevs_list": [ 01:21:39.563 { 01:21:39.563 "name": "BaseBdev1", 01:21:39.563 "uuid": "369aac52-9003-4049-8dea-36028cec3d82", 01:21:39.563 "is_configured": true, 01:21:39.563 "data_offset": 2048, 01:21:39.563 "data_size": 63488 01:21:39.563 }, 01:21:39.563 { 01:21:39.563 "name": "BaseBdev2", 01:21:39.563 "uuid": "0a7d3fe2-0eda-4b60-b6d4-753e9e6275d3", 01:21:39.563 "is_configured": true, 01:21:39.563 "data_offset": 2048, 01:21:39.563 "data_size": 63488 01:21:39.563 } 01:21:39.563 ] 01:21:39.563 }' 01:21:39.563 05:16:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:21:39.563 05:16:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:21:40.129 05:16:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 01:21:40.129 05:16:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 01:21:40.129 05:16:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 01:21:40.129 05:16:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 01:21:40.129 05:16:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 01:21:40.129 05:16:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 01:21:40.129 05:16:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 01:21:40.129 05:16:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:40.129 05:16:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:21:40.129 05:16:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 01:21:40.129 [2024-12-09 05:16:31.650060] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 01:21:40.129 05:16:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:40.130 05:16:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 01:21:40.130 "name": "Existed_Raid", 01:21:40.130 "aliases": [ 01:21:40.130 "3dfe5977-0e0a-4420-80e4-fc598f8b37b8" 01:21:40.130 ], 01:21:40.130 "product_name": "Raid Volume", 01:21:40.130 "block_size": 512, 01:21:40.130 "num_blocks": 126976, 01:21:40.130 "uuid": "3dfe5977-0e0a-4420-80e4-fc598f8b37b8", 01:21:40.130 "assigned_rate_limits": { 01:21:40.130 "rw_ios_per_sec": 0, 01:21:40.130 "rw_mbytes_per_sec": 0, 01:21:40.130 "r_mbytes_per_sec": 0, 01:21:40.130 "w_mbytes_per_sec": 0 01:21:40.130 }, 01:21:40.130 "claimed": false, 01:21:40.130 "zoned": false, 01:21:40.130 "supported_io_types": { 01:21:40.130 "read": true, 01:21:40.130 "write": true, 01:21:40.130 "unmap": true, 01:21:40.130 "flush": true, 01:21:40.130 "reset": true, 01:21:40.130 "nvme_admin": false, 01:21:40.130 "nvme_io": false, 01:21:40.130 "nvme_io_md": false, 01:21:40.130 "write_zeroes": true, 01:21:40.130 "zcopy": false, 01:21:40.130 "get_zone_info": false, 01:21:40.130 "zone_management": false, 01:21:40.130 "zone_append": false, 01:21:40.130 "compare": false, 01:21:40.130 "compare_and_write": false, 01:21:40.130 "abort": false, 01:21:40.130 "seek_hole": false, 01:21:40.130 "seek_data": false, 01:21:40.130 "copy": false, 01:21:40.130 "nvme_iov_md": false 01:21:40.130 }, 01:21:40.130 "memory_domains": [ 01:21:40.130 { 01:21:40.130 
"dma_device_id": "system", 01:21:40.130 "dma_device_type": 1 01:21:40.130 }, 01:21:40.130 { 01:21:40.130 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:21:40.130 "dma_device_type": 2 01:21:40.130 }, 01:21:40.130 { 01:21:40.130 "dma_device_id": "system", 01:21:40.130 "dma_device_type": 1 01:21:40.130 }, 01:21:40.130 { 01:21:40.130 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:21:40.130 "dma_device_type": 2 01:21:40.130 } 01:21:40.130 ], 01:21:40.130 "driver_specific": { 01:21:40.130 "raid": { 01:21:40.130 "uuid": "3dfe5977-0e0a-4420-80e4-fc598f8b37b8", 01:21:40.130 "strip_size_kb": 64, 01:21:40.130 "state": "online", 01:21:40.130 "raid_level": "concat", 01:21:40.130 "superblock": true, 01:21:40.130 "num_base_bdevs": 2, 01:21:40.130 "num_base_bdevs_discovered": 2, 01:21:40.130 "num_base_bdevs_operational": 2, 01:21:40.130 "base_bdevs_list": [ 01:21:40.130 { 01:21:40.130 "name": "BaseBdev1", 01:21:40.130 "uuid": "369aac52-9003-4049-8dea-36028cec3d82", 01:21:40.130 "is_configured": true, 01:21:40.130 "data_offset": 2048, 01:21:40.130 "data_size": 63488 01:21:40.130 }, 01:21:40.130 { 01:21:40.130 "name": "BaseBdev2", 01:21:40.130 "uuid": "0a7d3fe2-0eda-4b60-b6d4-753e9e6275d3", 01:21:40.130 "is_configured": true, 01:21:40.130 "data_offset": 2048, 01:21:40.130 "data_size": 63488 01:21:40.130 } 01:21:40.130 ] 01:21:40.130 } 01:21:40.130 } 01:21:40.130 }' 01:21:40.130 05:16:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 01:21:40.388 05:16:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 01:21:40.388 BaseBdev2' 01:21:40.388 05:16:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:21:40.389 05:16:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 01:21:40.389 05:16:31 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:21:40.389 05:16:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 01:21:40.389 05:16:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:21:40.389 05:16:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:40.389 05:16:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:21:40.389 05:16:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:40.389 05:16:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:21:40.389 05:16:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:21:40.389 05:16:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:21:40.389 05:16:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 01:21:40.389 05:16:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:21:40.389 05:16:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:40.389 05:16:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:21:40.389 05:16:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:40.389 05:16:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:21:40.389 05:16:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:21:40.389 05:16:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # 
rpc_cmd bdev_malloc_delete BaseBdev1 01:21:40.389 05:16:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:40.389 05:16:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:21:40.389 [2024-12-09 05:16:31.905961] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 01:21:40.389 [2024-12-09 05:16:31.906120] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 01:21:40.389 [2024-12-09 05:16:31.906212] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 01:21:40.389 05:16:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:40.389 05:16:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 01:21:40.389 05:16:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 01:21:40.389 05:16:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 01:21:40.389 05:16:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 01:21:40.389 05:16:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 01:21:40.389 05:16:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 01:21:40.389 05:16:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:21:40.389 05:16:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 01:21:40.389 05:16:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 01:21:40.389 05:16:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:21:40.389 05:16:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 
01:21:40.389 05:16:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:21:40.389 05:16:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:21:40.389 05:16:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:21:40.389 05:16:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 01:21:40.389 05:16:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:21:40.389 05:16:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:21:40.389 05:16:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:40.389 05:16:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:21:40.647 05:16:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:40.647 05:16:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:21:40.647 "name": "Existed_Raid", 01:21:40.647 "uuid": "3dfe5977-0e0a-4420-80e4-fc598f8b37b8", 01:21:40.647 "strip_size_kb": 64, 01:21:40.647 "state": "offline", 01:21:40.647 "raid_level": "concat", 01:21:40.647 "superblock": true, 01:21:40.647 "num_base_bdevs": 2, 01:21:40.647 "num_base_bdevs_discovered": 1, 01:21:40.647 "num_base_bdevs_operational": 1, 01:21:40.647 "base_bdevs_list": [ 01:21:40.647 { 01:21:40.647 "name": null, 01:21:40.647 "uuid": "00000000-0000-0000-0000-000000000000", 01:21:40.647 "is_configured": false, 01:21:40.647 "data_offset": 0, 01:21:40.647 "data_size": 63488 01:21:40.647 }, 01:21:40.647 { 01:21:40.647 "name": "BaseBdev2", 01:21:40.647 "uuid": "0a7d3fe2-0eda-4b60-b6d4-753e9e6275d3", 01:21:40.647 "is_configured": true, 01:21:40.647 "data_offset": 2048, 01:21:40.647 "data_size": 63488 01:21:40.647 } 01:21:40.647 ] 
01:21:40.647 }' 01:21:40.647 05:16:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:21:40.647 05:16:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:21:40.905 05:16:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 01:21:40.905 05:16:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 01:21:41.163 05:16:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 01:21:41.163 05:16:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 01:21:41.163 05:16:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:41.163 05:16:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:21:41.163 05:16:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:41.163 05:16:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 01:21:41.163 05:16:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 01:21:41.163 05:16:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 01:21:41.163 05:16:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:41.163 05:16:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:21:41.163 [2024-12-09 05:16:32.582724] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 01:21:41.163 [2024-12-09 05:16:32.582795] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 01:21:41.163 05:16:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:41.163 05:16:32 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 01:21:41.163 05:16:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 01:21:41.163 05:16:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 01:21:41.163 05:16:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 01:21:41.163 05:16:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:41.163 05:16:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:21:41.164 05:16:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:41.164 05:16:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 01:21:41.164 05:16:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 01:21:41.164 05:16:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 01:21:41.164 05:16:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 61766 01:21:41.164 05:16:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 61766 ']' 01:21:41.164 05:16:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 61766 01:21:41.164 05:16:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 01:21:41.164 05:16:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:21:41.164 05:16:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61766 01:21:41.164 killing process with pid 61766 01:21:41.164 05:16:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:21:41.164 05:16:32 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:21:41.164 05:16:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61766' 01:21:41.164 05:16:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 61766 01:21:41.164 [2024-12-09 05:16:32.758907] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 01:21:41.164 05:16:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 61766 01:21:41.164 [2024-12-09 05:16:32.772656] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 01:21:42.540 05:16:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 01:21:42.540 01:21:42.540 real 0m5.768s 01:21:42.540 user 0m8.672s 01:21:42.540 sys 0m0.870s 01:21:42.540 05:16:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 01:21:42.540 05:16:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:21:42.540 ************************************ 01:21:42.540 END TEST raid_state_function_test_sb 01:21:42.540 ************************************ 01:21:42.540 05:16:33 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 2 01:21:42.541 05:16:33 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 01:21:42.541 05:16:33 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 01:21:42.541 05:16:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 01:21:42.541 ************************************ 01:21:42.541 START TEST raid_superblock_test 01:21:42.541 ************************************ 01:21:42.541 05:16:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 2 01:21:42.541 05:16:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 01:21:42.541 05:16:33 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 01:21:42.541 05:16:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 01:21:42.541 05:16:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 01:21:42.541 05:16:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 01:21:42.541 05:16:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 01:21:42.541 05:16:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 01:21:42.541 05:16:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 01:21:42.541 05:16:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 01:21:42.541 05:16:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 01:21:42.541 05:16:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 01:21:42.541 05:16:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 01:21:42.541 05:16:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 01:21:42.541 05:16:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 01:21:42.541 05:16:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 01:21:42.541 05:16:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 01:21:42.541 05:16:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=62023 01:21:42.541 05:16:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 62023 01:21:42.541 05:16:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 62023 ']' 01:21:42.541 05:16:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:21:42.541 Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:21:42.541 05:16:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 01:21:42.541 05:16:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 01:21:42.541 05:16:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:21:42.541 05:16:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 01:21:42.541 05:16:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:21:42.541 [2024-12-09 05:16:34.087922] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:21:42.541 [2024-12-09 05:16:34.088141] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62023 ] 01:21:42.800 [2024-12-09 05:16:34.282488] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:21:43.057 [2024-12-09 05:16:34.453908] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:21:43.314 [2024-12-09 05:16:34.756142] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 01:21:43.314 [2024-12-09 05:16:34.756210] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 01:21:43.572 05:16:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:21:43.572 05:16:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 01:21:43.572 05:16:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 01:21:43.572 05:16:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 
01:21:43.572 05:16:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 01:21:43.572 05:16:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 01:21:43.572 05:16:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 01:21:43.572 05:16:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 01:21:43.572 05:16:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 01:21:43.572 05:16:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 01:21:43.572 05:16:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 01:21:43.572 05:16:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:43.572 05:16:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:21:43.572 malloc1 01:21:43.572 05:16:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:43.572 05:16:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 01:21:43.572 05:16:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:43.572 05:16:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:21:43.572 [2024-12-09 05:16:35.089226] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 01:21:43.572 [2024-12-09 05:16:35.089325] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:21:43.572 [2024-12-09 05:16:35.089382] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 01:21:43.572 [2024-12-09 05:16:35.089406] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev 
claimed 01:21:43.572 [2024-12-09 05:16:35.092502] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:21:43.572 [2024-12-09 05:16:35.092556] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 01:21:43.572 pt1 01:21:43.572 05:16:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:43.572 05:16:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 01:21:43.572 05:16:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 01:21:43.572 05:16:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 01:21:43.572 05:16:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 01:21:43.572 05:16:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 01:21:43.572 05:16:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 01:21:43.572 05:16:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 01:21:43.572 05:16:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 01:21:43.572 05:16:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 01:21:43.572 05:16:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:43.572 05:16:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:21:43.572 malloc2 01:21:43.572 05:16:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:43.572 05:16:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 01:21:43.572 05:16:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
01:21:43.572 05:16:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:21:43.572 [2024-12-09 05:16:35.142320] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 01:21:43.572 [2024-12-09 05:16:35.142419] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:21:43.572 [2024-12-09 05:16:35.142464] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 01:21:43.572 [2024-12-09 05:16:35.142483] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:21:43.573 [2024-12-09 05:16:35.145349] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:21:43.573 [2024-12-09 05:16:35.145415] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 01:21:43.573 pt2 01:21:43.573 05:16:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:43.573 05:16:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 01:21:43.573 05:16:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 01:21:43.573 05:16:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 01:21:43.573 05:16:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:43.573 05:16:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:21:43.573 [2024-12-09 05:16:35.150468] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 01:21:43.573 [2024-12-09 05:16:35.152975] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 01:21:43.573 [2024-12-09 05:16:35.153227] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 01:21:43.573 [2024-12-09 05:16:35.153250] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
01:21:43.573 [2024-12-09 05:16:35.153650] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 01:21:43.573 [2024-12-09 05:16:35.153902] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 01:21:43.573 [2024-12-09 05:16:35.153935] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 01:21:43.573 [2024-12-09 05:16:35.154173] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:21:43.573 05:16:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:43.573 05:16:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 01:21:43.573 05:16:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:21:43.573 05:16:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:21:43.573 05:16:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 01:21:43.573 05:16:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:21:43.573 05:16:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 01:21:43.573 05:16:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:21:43.573 05:16:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:21:43.573 05:16:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:21:43.573 05:16:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:21:43.573 05:16:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:21:43.573 05:16:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:21:43.573 05:16:35 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:43.573 05:16:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:21:43.573 05:16:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:43.851 05:16:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:21:43.851 "name": "raid_bdev1", 01:21:43.851 "uuid": "b85ccd0e-754a-4ba5-86e1-8a6f7a081e2c", 01:21:43.851 "strip_size_kb": 64, 01:21:43.851 "state": "online", 01:21:43.851 "raid_level": "concat", 01:21:43.851 "superblock": true, 01:21:43.851 "num_base_bdevs": 2, 01:21:43.851 "num_base_bdevs_discovered": 2, 01:21:43.851 "num_base_bdevs_operational": 2, 01:21:43.851 "base_bdevs_list": [ 01:21:43.851 { 01:21:43.851 "name": "pt1", 01:21:43.851 "uuid": "00000000-0000-0000-0000-000000000001", 01:21:43.851 "is_configured": true, 01:21:43.851 "data_offset": 2048, 01:21:43.851 "data_size": 63488 01:21:43.851 }, 01:21:43.851 { 01:21:43.851 "name": "pt2", 01:21:43.851 "uuid": "00000000-0000-0000-0000-000000000002", 01:21:43.851 "is_configured": true, 01:21:43.851 "data_offset": 2048, 01:21:43.851 "data_size": 63488 01:21:43.851 } 01:21:43.851 ] 01:21:43.851 }' 01:21:43.851 05:16:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:21:43.851 05:16:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:21:44.125 05:16:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 01:21:44.125 05:16:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 01:21:44.125 05:16:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 01:21:44.125 05:16:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 01:21:44.125 05:16:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 01:21:44.125 
05:16:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 01:21:44.125 05:16:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 01:21:44.125 05:16:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 01:21:44.125 05:16:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:44.125 05:16:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:21:44.125 [2024-12-09 05:16:35.626888] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 01:21:44.125 05:16:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:44.125 05:16:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 01:21:44.125 "name": "raid_bdev1", 01:21:44.125 "aliases": [ 01:21:44.125 "b85ccd0e-754a-4ba5-86e1-8a6f7a081e2c" 01:21:44.125 ], 01:21:44.125 "product_name": "Raid Volume", 01:21:44.125 "block_size": 512, 01:21:44.125 "num_blocks": 126976, 01:21:44.125 "uuid": "b85ccd0e-754a-4ba5-86e1-8a6f7a081e2c", 01:21:44.125 "assigned_rate_limits": { 01:21:44.125 "rw_ios_per_sec": 0, 01:21:44.125 "rw_mbytes_per_sec": 0, 01:21:44.125 "r_mbytes_per_sec": 0, 01:21:44.125 "w_mbytes_per_sec": 0 01:21:44.125 }, 01:21:44.125 "claimed": false, 01:21:44.125 "zoned": false, 01:21:44.125 "supported_io_types": { 01:21:44.125 "read": true, 01:21:44.125 "write": true, 01:21:44.125 "unmap": true, 01:21:44.125 "flush": true, 01:21:44.125 "reset": true, 01:21:44.125 "nvme_admin": false, 01:21:44.125 "nvme_io": false, 01:21:44.125 "nvme_io_md": false, 01:21:44.125 "write_zeroes": true, 01:21:44.125 "zcopy": false, 01:21:44.125 "get_zone_info": false, 01:21:44.125 "zone_management": false, 01:21:44.125 "zone_append": false, 01:21:44.125 "compare": false, 01:21:44.125 "compare_and_write": false, 01:21:44.125 "abort": false, 01:21:44.125 "seek_hole": false, 01:21:44.125 
"seek_data": false, 01:21:44.125 "copy": false, 01:21:44.125 "nvme_iov_md": false 01:21:44.125 }, 01:21:44.125 "memory_domains": [ 01:21:44.125 { 01:21:44.125 "dma_device_id": "system", 01:21:44.125 "dma_device_type": 1 01:21:44.125 }, 01:21:44.125 { 01:21:44.125 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:21:44.125 "dma_device_type": 2 01:21:44.125 }, 01:21:44.125 { 01:21:44.125 "dma_device_id": "system", 01:21:44.125 "dma_device_type": 1 01:21:44.125 }, 01:21:44.125 { 01:21:44.125 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:21:44.125 "dma_device_type": 2 01:21:44.125 } 01:21:44.125 ], 01:21:44.125 "driver_specific": { 01:21:44.125 "raid": { 01:21:44.125 "uuid": "b85ccd0e-754a-4ba5-86e1-8a6f7a081e2c", 01:21:44.125 "strip_size_kb": 64, 01:21:44.125 "state": "online", 01:21:44.125 "raid_level": "concat", 01:21:44.125 "superblock": true, 01:21:44.125 "num_base_bdevs": 2, 01:21:44.125 "num_base_bdevs_discovered": 2, 01:21:44.125 "num_base_bdevs_operational": 2, 01:21:44.125 "base_bdevs_list": [ 01:21:44.125 { 01:21:44.125 "name": "pt1", 01:21:44.125 "uuid": "00000000-0000-0000-0000-000000000001", 01:21:44.125 "is_configured": true, 01:21:44.125 "data_offset": 2048, 01:21:44.125 "data_size": 63488 01:21:44.125 }, 01:21:44.125 { 01:21:44.125 "name": "pt2", 01:21:44.125 "uuid": "00000000-0000-0000-0000-000000000002", 01:21:44.125 "is_configured": true, 01:21:44.125 "data_offset": 2048, 01:21:44.125 "data_size": 63488 01:21:44.125 } 01:21:44.125 ] 01:21:44.125 } 01:21:44.125 } 01:21:44.125 }' 01:21:44.125 05:16:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 01:21:44.125 05:16:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 01:21:44.125 pt2' 01:21:44.125 05:16:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:21:44.384 05:16:35 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 01:21:44.384 05:16:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:21:44.384 05:16:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 01:21:44.384 05:16:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:44.384 05:16:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:21:44.384 05:16:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:21:44.384 05:16:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:44.384 05:16:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:21:44.384 05:16:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:21:44.384 05:16:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:21:44.384 05:16:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 01:21:44.384 05:16:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:44.384 05:16:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:21:44.384 05:16:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:21:44.384 05:16:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:44.384 05:16:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:21:44.384 05:16:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:21:44.384 05:16:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 01:21:44.384 05:16:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:44.384 05:16:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:21:44.384 05:16:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 01:21:44.384 [2024-12-09 05:16:35.874996] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 01:21:44.384 05:16:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:44.384 05:16:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=b85ccd0e-754a-4ba5-86e1-8a6f7a081e2c 01:21:44.384 05:16:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z b85ccd0e-754a-4ba5-86e1-8a6f7a081e2c ']' 01:21:44.384 05:16:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 01:21:44.384 05:16:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:44.384 05:16:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:21:44.384 [2024-12-09 05:16:35.922588] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 01:21:44.384 [2024-12-09 05:16:35.922647] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 01:21:44.384 [2024-12-09 05:16:35.922802] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 01:21:44.384 [2024-12-09 05:16:35.922912] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 01:21:44.384 [2024-12-09 05:16:35.922937] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 01:21:44.384 05:16:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:44.384 05:16:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd 
bdev_raid_get_bdevs all 01:21:44.384 05:16:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:44.384 05:16:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:21:44.384 05:16:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 01:21:44.384 05:16:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:44.384 05:16:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 01:21:44.384 05:16:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 01:21:44.384 05:16:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 01:21:44.384 05:16:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 01:21:44.384 05:16:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:44.384 05:16:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:21:44.384 05:16:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:44.384 05:16:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 01:21:44.384 05:16:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 01:21:44.384 05:16:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:44.384 05:16:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:21:44.385 05:16:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:44.385 05:16:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 01:21:44.385 05:16:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:44.385 05:16:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | 
select(.product_name == "passthru")] | any' 01:21:44.385 05:16:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:21:44.643 05:16:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:44.643 05:16:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 01:21:44.643 05:16:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 01:21:44.643 05:16:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 01:21:44.643 05:16:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 01:21:44.644 05:16:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 01:21:44.644 05:16:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:21:44.644 05:16:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 01:21:44.644 05:16:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:21:44.644 05:16:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 01:21:44.644 05:16:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:44.644 05:16:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:21:44.644 [2024-12-09 05:16:36.058672] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 01:21:44.644 [2024-12-09 05:16:36.061283] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 01:21:44.644 [2024-12-09 05:16:36.061410] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: 
*ERROR*: Superblock of a different raid bdev found on bdev malloc1 01:21:44.644 [2024-12-09 05:16:36.061501] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 01:21:44.644 [2024-12-09 05:16:36.061549] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 01:21:44.644 [2024-12-09 05:16:36.061577] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 01:21:44.644 request: 01:21:44.644 { 01:21:44.644 "name": "raid_bdev1", 01:21:44.644 "raid_level": "concat", 01:21:44.644 "base_bdevs": [ 01:21:44.644 "malloc1", 01:21:44.644 "malloc2" 01:21:44.644 ], 01:21:44.644 "strip_size_kb": 64, 01:21:44.644 "superblock": false, 01:21:44.644 "method": "bdev_raid_create", 01:21:44.644 "req_id": 1 01:21:44.644 } 01:21:44.644 Got JSON-RPC error response 01:21:44.644 response: 01:21:44.644 { 01:21:44.644 "code": -17, 01:21:44.644 "message": "Failed to create RAID bdev raid_bdev1: File exists" 01:21:44.644 } 01:21:44.644 05:16:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 01:21:44.644 05:16:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 01:21:44.644 05:16:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:21:44.644 05:16:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:21:44.644 05:16:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:21:44.644 05:16:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 01:21:44.644 05:16:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:44.644 05:16:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 01:21:44.644 05:16:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:21:44.644 
05:16:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:44.644 05:16:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 01:21:44.644 05:16:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 01:21:44.644 05:16:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 01:21:44.644 05:16:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:44.644 05:16:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:21:44.644 [2024-12-09 05:16:36.122670] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 01:21:44.644 [2024-12-09 05:16:36.122784] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:21:44.644 [2024-12-09 05:16:36.122832] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 01:21:44.644 [2024-12-09 05:16:36.122854] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:21:44.644 [2024-12-09 05:16:36.125928] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:21:44.644 [2024-12-09 05:16:36.125985] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 01:21:44.644 [2024-12-09 05:16:36.126110] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 01:21:44.644 [2024-12-09 05:16:36.126196] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 01:21:44.644 pt1 01:21:44.644 05:16:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:44.644 05:16:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 01:21:44.644 05:16:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 01:21:44.644 05:16:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:21:44.644 05:16:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 01:21:44.644 05:16:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:21:44.644 05:16:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 01:21:44.644 05:16:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:21:44.644 05:16:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:21:44.644 05:16:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:21:44.644 05:16:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:21:44.644 05:16:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:21:44.644 05:16:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:44.644 05:16:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:21:44.644 05:16:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:21:44.644 05:16:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:44.644 05:16:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:21:44.644 "name": "raid_bdev1", 01:21:44.644 "uuid": "b85ccd0e-754a-4ba5-86e1-8a6f7a081e2c", 01:21:44.644 "strip_size_kb": 64, 01:21:44.644 "state": "configuring", 01:21:44.644 "raid_level": "concat", 01:21:44.644 "superblock": true, 01:21:44.644 "num_base_bdevs": 2, 01:21:44.644 "num_base_bdevs_discovered": 1, 01:21:44.644 "num_base_bdevs_operational": 2, 01:21:44.644 "base_bdevs_list": [ 01:21:44.644 { 01:21:44.644 "name": "pt1", 01:21:44.644 "uuid": 
"00000000-0000-0000-0000-000000000001", 01:21:44.644 "is_configured": true, 01:21:44.644 "data_offset": 2048, 01:21:44.644 "data_size": 63488 01:21:44.644 }, 01:21:44.644 { 01:21:44.644 "name": null, 01:21:44.644 "uuid": "00000000-0000-0000-0000-000000000002", 01:21:44.644 "is_configured": false, 01:21:44.644 "data_offset": 2048, 01:21:44.644 "data_size": 63488 01:21:44.644 } 01:21:44.644 ] 01:21:44.644 }' 01:21:44.644 05:16:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:21:44.644 05:16:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:21:45.211 05:16:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 01:21:45.211 05:16:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 01:21:45.211 05:16:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 01:21:45.211 05:16:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 01:21:45.211 05:16:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:45.211 05:16:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:21:45.211 [2024-12-09 05:16:36.650877] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 01:21:45.211 [2024-12-09 05:16:36.650985] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:21:45.211 [2024-12-09 05:16:36.651032] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 01:21:45.211 [2024-12-09 05:16:36.651053] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:21:45.211 [2024-12-09 05:16:36.651705] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:21:45.211 [2024-12-09 05:16:36.651755] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt2 01:21:45.211 [2024-12-09 05:16:36.651904] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 01:21:45.211 [2024-12-09 05:16:36.651953] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 01:21:45.211 [2024-12-09 05:16:36.652114] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 01:21:45.211 [2024-12-09 05:16:36.652140] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 01:21:45.211 [2024-12-09 05:16:36.652491] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 01:21:45.211 [2024-12-09 05:16:36.652693] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 01:21:45.211 [2024-12-09 05:16:36.652711] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 01:21:45.211 [2024-12-09 05:16:36.652928] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:21:45.211 pt2 01:21:45.211 05:16:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:45.211 05:16:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 01:21:45.211 05:16:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 01:21:45.211 05:16:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 01:21:45.211 05:16:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:21:45.211 05:16:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:21:45.211 05:16:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 01:21:45.211 05:16:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:21:45.211 05:16:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=2 01:21:45.211 05:16:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:21:45.211 05:16:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:21:45.211 05:16:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:21:45.211 05:16:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:21:45.211 05:16:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:21:45.211 05:16:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:21:45.211 05:16:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:45.211 05:16:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:21:45.211 05:16:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:45.211 05:16:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:21:45.211 "name": "raid_bdev1", 01:21:45.211 "uuid": "b85ccd0e-754a-4ba5-86e1-8a6f7a081e2c", 01:21:45.211 "strip_size_kb": 64, 01:21:45.211 "state": "online", 01:21:45.211 "raid_level": "concat", 01:21:45.211 "superblock": true, 01:21:45.211 "num_base_bdevs": 2, 01:21:45.211 "num_base_bdevs_discovered": 2, 01:21:45.211 "num_base_bdevs_operational": 2, 01:21:45.211 "base_bdevs_list": [ 01:21:45.211 { 01:21:45.211 "name": "pt1", 01:21:45.211 "uuid": "00000000-0000-0000-0000-000000000001", 01:21:45.211 "is_configured": true, 01:21:45.211 "data_offset": 2048, 01:21:45.211 "data_size": 63488 01:21:45.211 }, 01:21:45.211 { 01:21:45.211 "name": "pt2", 01:21:45.211 "uuid": "00000000-0000-0000-0000-000000000002", 01:21:45.211 "is_configured": true, 01:21:45.211 "data_offset": 2048, 01:21:45.211 "data_size": 63488 01:21:45.211 } 01:21:45.211 ] 01:21:45.211 }' 01:21:45.211 05:16:36 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:21:45.211 05:16:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:21:45.777 05:16:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 01:21:45.777 05:16:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 01:21:45.777 05:16:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 01:21:45.778 05:16:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 01:21:45.778 05:16:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 01:21:45.778 05:16:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 01:21:45.778 05:16:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 01:21:45.778 05:16:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 01:21:45.778 05:16:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:45.778 05:16:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:21:45.778 [2024-12-09 05:16:37.183594] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 01:21:45.778 05:16:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:45.778 05:16:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 01:21:45.778 "name": "raid_bdev1", 01:21:45.778 "aliases": [ 01:21:45.778 "b85ccd0e-754a-4ba5-86e1-8a6f7a081e2c" 01:21:45.778 ], 01:21:45.778 "product_name": "Raid Volume", 01:21:45.778 "block_size": 512, 01:21:45.778 "num_blocks": 126976, 01:21:45.778 "uuid": "b85ccd0e-754a-4ba5-86e1-8a6f7a081e2c", 01:21:45.778 "assigned_rate_limits": { 01:21:45.778 "rw_ios_per_sec": 0, 01:21:45.778 "rw_mbytes_per_sec": 0, 01:21:45.778 
"r_mbytes_per_sec": 0, 01:21:45.778 "w_mbytes_per_sec": 0 01:21:45.778 }, 01:21:45.778 "claimed": false, 01:21:45.778 "zoned": false, 01:21:45.778 "supported_io_types": { 01:21:45.778 "read": true, 01:21:45.778 "write": true, 01:21:45.778 "unmap": true, 01:21:45.778 "flush": true, 01:21:45.778 "reset": true, 01:21:45.778 "nvme_admin": false, 01:21:45.778 "nvme_io": false, 01:21:45.778 "nvme_io_md": false, 01:21:45.778 "write_zeroes": true, 01:21:45.778 "zcopy": false, 01:21:45.778 "get_zone_info": false, 01:21:45.778 "zone_management": false, 01:21:45.778 "zone_append": false, 01:21:45.778 "compare": false, 01:21:45.778 "compare_and_write": false, 01:21:45.778 "abort": false, 01:21:45.778 "seek_hole": false, 01:21:45.778 "seek_data": false, 01:21:45.778 "copy": false, 01:21:45.778 "nvme_iov_md": false 01:21:45.778 }, 01:21:45.778 "memory_domains": [ 01:21:45.778 { 01:21:45.778 "dma_device_id": "system", 01:21:45.778 "dma_device_type": 1 01:21:45.778 }, 01:21:45.778 { 01:21:45.778 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:21:45.778 "dma_device_type": 2 01:21:45.778 }, 01:21:45.778 { 01:21:45.778 "dma_device_id": "system", 01:21:45.778 "dma_device_type": 1 01:21:45.778 }, 01:21:45.778 { 01:21:45.778 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:21:45.778 "dma_device_type": 2 01:21:45.778 } 01:21:45.778 ], 01:21:45.778 "driver_specific": { 01:21:45.778 "raid": { 01:21:45.778 "uuid": "b85ccd0e-754a-4ba5-86e1-8a6f7a081e2c", 01:21:45.778 "strip_size_kb": 64, 01:21:45.778 "state": "online", 01:21:45.778 "raid_level": "concat", 01:21:45.778 "superblock": true, 01:21:45.778 "num_base_bdevs": 2, 01:21:45.778 "num_base_bdevs_discovered": 2, 01:21:45.778 "num_base_bdevs_operational": 2, 01:21:45.778 "base_bdevs_list": [ 01:21:45.778 { 01:21:45.778 "name": "pt1", 01:21:45.778 "uuid": "00000000-0000-0000-0000-000000000001", 01:21:45.778 "is_configured": true, 01:21:45.778 "data_offset": 2048, 01:21:45.778 "data_size": 63488 01:21:45.778 }, 01:21:45.778 { 01:21:45.778 "name": 
"pt2", 01:21:45.778 "uuid": "00000000-0000-0000-0000-000000000002", 01:21:45.778 "is_configured": true, 01:21:45.778 "data_offset": 2048, 01:21:45.778 "data_size": 63488 01:21:45.778 } 01:21:45.778 ] 01:21:45.778 } 01:21:45.778 } 01:21:45.778 }' 01:21:45.778 05:16:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 01:21:45.778 05:16:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 01:21:45.778 pt2' 01:21:45.778 05:16:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:21:45.778 05:16:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 01:21:45.778 05:16:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:21:45.778 05:16:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 01:21:45.778 05:16:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:45.778 05:16:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:21:45.778 05:16:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:21:45.778 05:16:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:46.036 05:16:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:21:46.036 05:16:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:21:46.036 05:16:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:21:46.036 05:16:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 01:21:46.036 05:16:37 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 01:21:46.036 05:16:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:21:46.036 05:16:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:21:46.036 05:16:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:46.036 05:16:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:21:46.036 05:16:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:21:46.036 05:16:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 01:21:46.036 05:16:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 01:21:46.037 05:16:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:46.037 05:16:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:21:46.037 [2024-12-09 05:16:37.455318] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 01:21:46.037 05:16:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:46.037 05:16:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' b85ccd0e-754a-4ba5-86e1-8a6f7a081e2c '!=' b85ccd0e-754a-4ba5-86e1-8a6f7a081e2c ']' 01:21:46.037 05:16:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 01:21:46.037 05:16:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 01:21:46.037 05:16:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 01:21:46.037 05:16:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 62023 01:21:46.037 05:16:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 62023 ']' 01:21:46.037 05:16:37 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@958 -- # kill -0 62023 01:21:46.037 05:16:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 01:21:46.037 05:16:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:21:46.037 05:16:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62023 01:21:46.037 05:16:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:21:46.037 05:16:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:21:46.037 killing process with pid 62023 01:21:46.037 05:16:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62023' 01:21:46.037 05:16:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 62023 01:21:46.037 [2024-12-09 05:16:37.534169] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 01:21:46.037 05:16:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 62023 01:21:46.037 [2024-12-09 05:16:37.534271] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 01:21:46.037 [2024-12-09 05:16:37.534344] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 01:21:46.037 [2024-12-09 05:16:37.534380] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 01:21:46.295 [2024-12-09 05:16:37.689623] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 01:21:47.251 05:16:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 01:21:47.251 01:21:47.251 real 0m4.780s 01:21:47.251 user 0m6.979s 01:21:47.251 sys 0m0.747s 01:21:47.251 05:16:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 01:21:47.251 ************************************ 01:21:47.251 END TEST 
raid_superblock_test 01:21:47.251 ************************************ 01:21:47.251 05:16:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:21:47.251 05:16:38 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 01:21:47.251 05:16:38 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 01:21:47.251 05:16:38 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 01:21:47.251 05:16:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 01:21:47.251 ************************************ 01:21:47.251 START TEST raid_read_error_test 01:21:47.251 ************************************ 01:21:47.251 05:16:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 read 01:21:47.251 05:16:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 01:21:47.251 05:16:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 01:21:47.251 05:16:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 01:21:47.251 05:16:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 01:21:47.251 05:16:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 01:21:47.251 05:16:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 01:21:47.251 05:16:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 01:21:47.251 05:16:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 01:21:47.251 05:16:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 01:21:47.251 05:16:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 01:21:47.251 05:16:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 01:21:47.251 05:16:38 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 01:21:47.251 05:16:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 01:21:47.251 05:16:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 01:21:47.251 05:16:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 01:21:47.251 05:16:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 01:21:47.251 05:16:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 01:21:47.251 05:16:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 01:21:47.251 05:16:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 01:21:47.251 05:16:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 01:21:47.251 05:16:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 01:21:47.251 05:16:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 01:21:47.251 05:16:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.93p9buL6b9 01:21:47.251 05:16:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62235 01:21:47.251 05:16:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62235 01:21:47.251 05:16:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 01:21:47.251 05:16:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 62235 ']' 01:21:47.251 05:16:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:21:47.251 05:16:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 01:21:47.251 Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:21:47.251 05:16:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:21:47.251 05:16:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 01:21:47.251 05:16:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 01:21:47.509 [2024-12-09 05:16:38.906000] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:21:47.509 [2024-12-09 05:16:38.906170] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62235 ] 01:21:47.509 [2024-12-09 05:16:39.074336] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:21:47.766 [2024-12-09 05:16:39.206344] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:21:48.023 [2024-12-09 05:16:39.412059] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 01:21:48.023 [2024-12-09 05:16:39.412143] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 01:21:48.589 05:16:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:21:48.589 05:16:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 01:21:48.589 05:16:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 01:21:48.589 05:16:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 01:21:48.589 05:16:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:48.589 05:16:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 01:21:48.589 
BaseBdev1_malloc 01:21:48.589 05:16:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:48.589 05:16:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 01:21:48.589 05:16:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:48.589 05:16:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 01:21:48.589 true 01:21:48.589 05:16:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:48.589 05:16:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 01:21:48.589 05:16:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:48.589 05:16:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 01:21:48.589 [2024-12-09 05:16:40.028575] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 01:21:48.589 [2024-12-09 05:16:40.028657] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:21:48.589 [2024-12-09 05:16:40.028686] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 01:21:48.589 [2024-12-09 05:16:40.028720] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:21:48.589 [2024-12-09 05:16:40.031366] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:21:48.590 [2024-12-09 05:16:40.031421] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 01:21:48.590 BaseBdev1 01:21:48.590 05:16:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:48.590 05:16:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 01:21:48.590 05:16:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd 
bdev_malloc_create 32 512 -b BaseBdev2_malloc 01:21:48.590 05:16:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:48.590 05:16:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 01:21:48.590 BaseBdev2_malloc 01:21:48.590 05:16:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:48.590 05:16:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 01:21:48.590 05:16:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:48.590 05:16:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 01:21:48.590 true 01:21:48.590 05:16:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:48.590 05:16:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 01:21:48.590 05:16:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:48.590 05:16:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 01:21:48.590 [2024-12-09 05:16:40.085280] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 01:21:48.590 [2024-12-09 05:16:40.085349] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:21:48.590 [2024-12-09 05:16:40.085387] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 01:21:48.590 [2024-12-09 05:16:40.085404] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:21:48.590 [2024-12-09 05:16:40.088056] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:21:48.590 [2024-12-09 05:16:40.088099] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 01:21:48.590 BaseBdev2 01:21:48.590 05:16:40 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:48.590 05:16:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 01:21:48.590 05:16:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:48.590 05:16:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 01:21:48.590 [2024-12-09 05:16:40.093375] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 01:21:48.590 [2024-12-09 05:16:40.095761] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 01:21:48.590 [2024-12-09 05:16:40.096065] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 01:21:48.590 [2024-12-09 05:16:40.096104] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 01:21:48.590 [2024-12-09 05:16:40.096456] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 01:21:48.590 [2024-12-09 05:16:40.096712] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 01:21:48.590 [2024-12-09 05:16:40.096734] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 01:21:48.590 [2024-12-09 05:16:40.096938] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:21:48.590 05:16:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:48.590 05:16:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 01:21:48.590 05:16:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:21:48.590 05:16:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:21:48.590 05:16:40 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 01:21:48.590 05:16:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:21:48.590 05:16:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 01:21:48.590 05:16:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:21:48.590 05:16:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:21:48.590 05:16:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:21:48.590 05:16:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:21:48.590 05:16:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:21:48.590 05:16:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:48.590 05:16:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 01:21:48.590 05:16:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:21:48.590 05:16:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:48.590 05:16:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:21:48.590 "name": "raid_bdev1", 01:21:48.590 "uuid": "b78b6126-be94-4e73-b7e6-f85b25c74e80", 01:21:48.590 "strip_size_kb": 64, 01:21:48.590 "state": "online", 01:21:48.590 "raid_level": "concat", 01:21:48.590 "superblock": true, 01:21:48.590 "num_base_bdevs": 2, 01:21:48.590 "num_base_bdevs_discovered": 2, 01:21:48.590 "num_base_bdevs_operational": 2, 01:21:48.590 "base_bdevs_list": [ 01:21:48.590 { 01:21:48.590 "name": "BaseBdev1", 01:21:48.590 "uuid": "bd4bcdf3-2093-59c7-9c74-533e5b3b7a7d", 01:21:48.590 "is_configured": true, 01:21:48.590 "data_offset": 2048, 01:21:48.590 "data_size": 63488 01:21:48.590 }, 
01:21:48.590 { 01:21:48.590 "name": "BaseBdev2", 01:21:48.590 "uuid": "ed4add21-5bf0-533e-87b3-9dd76381333a", 01:21:48.590 "is_configured": true, 01:21:48.590 "data_offset": 2048, 01:21:48.590 "data_size": 63488 01:21:48.590 } 01:21:48.590 ] 01:21:48.590 }' 01:21:48.590 05:16:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:21:48.590 05:16:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 01:21:49.155 05:16:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 01:21:49.155 05:16:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 01:21:49.155 [2024-12-09 05:16:40.767399] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 01:21:50.087 05:16:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 01:21:50.087 05:16:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:50.087 05:16:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 01:21:50.087 05:16:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:50.087 05:16:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 01:21:50.087 05:16:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 01:21:50.087 05:16:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 01:21:50.087 05:16:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 01:21:50.087 05:16:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:21:50.087 05:16:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:21:50.087 05:16:41 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 01:21:50.087 05:16:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:21:50.087 05:16:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 01:21:50.087 05:16:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:21:50.087 05:16:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:21:50.087 05:16:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:21:50.087 05:16:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:21:50.087 05:16:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:21:50.087 05:16:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:50.087 05:16:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:21:50.087 05:16:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 01:21:50.087 05:16:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:50.087 05:16:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:21:50.087 "name": "raid_bdev1", 01:21:50.087 "uuid": "b78b6126-be94-4e73-b7e6-f85b25c74e80", 01:21:50.087 "strip_size_kb": 64, 01:21:50.087 "state": "online", 01:21:50.087 "raid_level": "concat", 01:21:50.087 "superblock": true, 01:21:50.087 "num_base_bdevs": 2, 01:21:50.087 "num_base_bdevs_discovered": 2, 01:21:50.087 "num_base_bdevs_operational": 2, 01:21:50.087 "base_bdevs_list": [ 01:21:50.087 { 01:21:50.087 "name": "BaseBdev1", 01:21:50.087 "uuid": "bd4bcdf3-2093-59c7-9c74-533e5b3b7a7d", 01:21:50.087 "is_configured": true, 01:21:50.087 "data_offset": 2048, 01:21:50.087 "data_size": 63488 01:21:50.087 }, 
01:21:50.087 { 01:21:50.087 "name": "BaseBdev2", 01:21:50.087 "uuid": "ed4add21-5bf0-533e-87b3-9dd76381333a", 01:21:50.087 "is_configured": true, 01:21:50.087 "data_offset": 2048, 01:21:50.087 "data_size": 63488 01:21:50.087 } 01:21:50.087 ] 01:21:50.087 }' 01:21:50.087 05:16:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:21:50.087 05:16:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 01:21:50.654 05:16:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 01:21:50.654 05:16:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:50.654 05:16:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 01:21:50.654 [2024-12-09 05:16:42.169026] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 01:21:50.654 [2024-12-09 05:16:42.169092] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 01:21:50.654 [2024-12-09 05:16:42.172853] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 01:21:50.654 [2024-12-09 05:16:42.172931] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:21:50.654 [2024-12-09 05:16:42.172979] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 01:21:50.654 [2024-12-09 05:16:42.172997] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 01:21:50.654 { 01:21:50.654 "results": [ 01:21:50.654 { 01:21:50.654 "job": "raid_bdev1", 01:21:50.654 "core_mask": "0x1", 01:21:50.654 "workload": "randrw", 01:21:50.654 "percentage": 50, 01:21:50.654 "status": "finished", 01:21:50.654 "queue_depth": 1, 01:21:50.654 "io_size": 131072, 01:21:50.654 "runtime": 1.39926, 01:21:50.654 "iops": 11892.714720638052, 01:21:50.654 "mibps": 1486.5893400797565, 01:21:50.654 "io_failed": 1, 
01:21:50.654 "io_timeout": 0, 01:21:50.654 "avg_latency_us": 117.40776261594432, 01:21:50.654 "min_latency_us": 35.60727272727273, 01:21:50.654 "max_latency_us": 1444.770909090909 01:21:50.654 } 01:21:50.654 ], 01:21:50.654 "core_count": 1 01:21:50.654 } 01:21:50.654 05:16:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:50.654 05:16:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62235 01:21:50.654 05:16:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 62235 ']' 01:21:50.654 05:16:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 62235 01:21:50.654 05:16:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 01:21:50.654 05:16:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:21:50.654 05:16:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62235 01:21:50.654 05:16:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:21:50.654 05:16:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:21:50.654 killing process with pid 62235 01:21:50.654 05:16:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62235' 01:21:50.654 05:16:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 62235 01:21:50.654 [2024-12-09 05:16:42.209570] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 01:21:50.654 05:16:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 62235 01:21:50.912 [2024-12-09 05:16:42.313287] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 01:21:52.290 05:16:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.93p9buL6b9 01:21:52.290 05:16:43 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@845 -- # grep raid_bdev1 01:21:52.290 05:16:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 01:21:52.290 05:16:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 01:21:52.290 05:16:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 01:21:52.290 05:16:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 01:21:52.290 05:16:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 01:21:52.290 05:16:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 01:21:52.290 01:21:52.290 real 0m4.673s 01:21:52.290 user 0m5.876s 01:21:52.290 sys 0m0.609s 01:21:52.290 05:16:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 01:21:52.290 05:16:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 01:21:52.290 ************************************ 01:21:52.290 END TEST raid_read_error_test 01:21:52.290 ************************************ 01:21:52.290 05:16:43 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 01:21:52.290 05:16:43 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 01:21:52.290 05:16:43 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 01:21:52.290 05:16:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 01:21:52.290 ************************************ 01:21:52.290 START TEST raid_write_error_test 01:21:52.290 ************************************ 01:21:52.290 05:16:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 write 01:21:52.290 05:16:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 01:21:52.290 05:16:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 01:21:52.290 05:16:43 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@792 -- # local error_io_type=write 01:21:52.290 05:16:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 01:21:52.290 05:16:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 01:21:52.290 05:16:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 01:21:52.290 05:16:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 01:21:52.290 05:16:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 01:21:52.290 05:16:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 01:21:52.290 05:16:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 01:21:52.290 05:16:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 01:21:52.290 05:16:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 01:21:52.290 05:16:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 01:21:52.290 05:16:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 01:21:52.290 05:16:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 01:21:52.290 05:16:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 01:21:52.290 05:16:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 01:21:52.290 05:16:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 01:21:52.290 05:16:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 01:21:52.290 05:16:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 01:21:52.290 05:16:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 01:21:52.290 05:16:43 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 01:21:52.290 05:16:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.ctPhHDXx35 01:21:52.290 05:16:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62381 01:21:52.290 05:16:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 01:21:52.290 05:16:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62381 01:21:52.290 05:16:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 62381 ']' 01:21:52.290 05:16:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:21:52.290 05:16:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 01:21:52.290 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:21:52.290 05:16:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:21:52.290 05:16:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 01:21:52.290 05:16:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 01:21:52.290 [2024-12-09 05:16:43.664877] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
01:21:52.290 [2024-12-09 05:16:43.665100] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62381 ] 01:21:52.290 [2024-12-09 05:16:43.853086] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:21:52.549 [2024-12-09 05:16:43.992691] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:21:52.809 [2024-12-09 05:16:44.239205] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 01:21:52.809 [2024-12-09 05:16:44.239255] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 01:21:53.067 05:16:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:21:53.067 05:16:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 01:21:53.067 05:16:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 01:21:53.067 05:16:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 01:21:53.067 05:16:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:53.067 05:16:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 01:21:53.327 BaseBdev1_malloc 01:21:53.327 05:16:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:53.327 05:16:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 01:21:53.327 05:16:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:53.327 05:16:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 01:21:53.327 true 01:21:53.327 05:16:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 01:21:53.327 05:16:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 01:21:53.327 05:16:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:53.327 05:16:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 01:21:53.327 [2024-12-09 05:16:44.725070] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 01:21:53.327 [2024-12-09 05:16:44.725141] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:21:53.327 [2024-12-09 05:16:44.725172] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 01:21:53.327 [2024-12-09 05:16:44.725190] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:21:53.327 [2024-12-09 05:16:44.728237] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:21:53.327 [2024-12-09 05:16:44.728303] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 01:21:53.327 BaseBdev1 01:21:53.327 05:16:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:53.327 05:16:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 01:21:53.327 05:16:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 01:21:53.327 05:16:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:53.327 05:16:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 01:21:53.327 BaseBdev2_malloc 01:21:53.327 05:16:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:53.327 05:16:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 01:21:53.327 05:16:44 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:53.327 05:16:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 01:21:53.327 true 01:21:53.327 05:16:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:53.327 05:16:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 01:21:53.327 05:16:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:53.327 05:16:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 01:21:53.327 [2024-12-09 05:16:44.788353] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 01:21:53.327 [2024-12-09 05:16:44.788544] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:21:53.327 [2024-12-09 05:16:44.788573] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 01:21:53.327 [2024-12-09 05:16:44.788590] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:21:53.327 [2024-12-09 05:16:44.791462] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:21:53.327 [2024-12-09 05:16:44.791522] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 01:21:53.327 BaseBdev2 01:21:53.327 05:16:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:53.327 05:16:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 01:21:53.327 05:16:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:53.327 05:16:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 01:21:53.327 [2024-12-09 05:16:44.800495] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 01:21:53.327 [2024-12-09 05:16:44.803101] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 01:21:53.327 [2024-12-09 05:16:44.803341] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 01:21:53.327 [2024-12-09 05:16:44.803429] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 01:21:53.327 [2024-12-09 05:16:44.803749] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 01:21:53.327 [2024-12-09 05:16:44.803968] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 01:21:53.327 [2024-12-09 05:16:44.803987] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 01:21:53.327 [2024-12-09 05:16:44.804187] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:21:53.327 05:16:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:53.327 05:16:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 01:21:53.327 05:16:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:21:53.327 05:16:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:21:53.327 05:16:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 01:21:53.327 05:16:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:21:53.327 05:16:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 01:21:53.327 05:16:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:21:53.327 05:16:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:21:53.327 05:16:44 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:21:53.327 05:16:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:21:53.327 05:16:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:21:53.327 05:16:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:21:53.327 05:16:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:53.327 05:16:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 01:21:53.327 05:16:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:53.327 05:16:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:21:53.327 "name": "raid_bdev1", 01:21:53.327 "uuid": "8c5ddeb1-fa5e-4101-94f0-f74e2b4aba34", 01:21:53.327 "strip_size_kb": 64, 01:21:53.327 "state": "online", 01:21:53.327 "raid_level": "concat", 01:21:53.327 "superblock": true, 01:21:53.327 "num_base_bdevs": 2, 01:21:53.327 "num_base_bdevs_discovered": 2, 01:21:53.327 "num_base_bdevs_operational": 2, 01:21:53.327 "base_bdevs_list": [ 01:21:53.327 { 01:21:53.327 "name": "BaseBdev1", 01:21:53.327 "uuid": "2965717c-c288-53b3-a19d-0f2e7434e29c", 01:21:53.327 "is_configured": true, 01:21:53.327 "data_offset": 2048, 01:21:53.327 "data_size": 63488 01:21:53.327 }, 01:21:53.327 { 01:21:53.327 "name": "BaseBdev2", 01:21:53.327 "uuid": "b0d3da82-5835-5acd-9706-f4e2ad567979", 01:21:53.328 "is_configured": true, 01:21:53.328 "data_offset": 2048, 01:21:53.328 "data_size": 63488 01:21:53.328 } 01:21:53.328 ] 01:21:53.328 }' 01:21:53.328 05:16:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:21:53.328 05:16:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 01:21:53.896 05:16:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- 
# /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 01:21:53.896 05:16:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 01:21:53.896 [2024-12-09 05:16:45.446099] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 01:21:54.880 05:16:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 01:21:54.880 05:16:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:54.880 05:16:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 01:21:54.880 05:16:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:54.880 05:16:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 01:21:54.880 05:16:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 01:21:54.880 05:16:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 01:21:54.880 05:16:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 01:21:54.880 05:16:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:21:54.880 05:16:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:21:54.880 05:16:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 01:21:54.880 05:16:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:21:54.880 05:16:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 01:21:54.880 05:16:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:21:54.880 05:16:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
01:21:54.880 05:16:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:21:54.880 05:16:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:21:54.880 05:16:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:21:54.880 05:16:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:21:54.880 05:16:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:54.880 05:16:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 01:21:54.880 05:16:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:54.880 05:16:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:21:54.880 "name": "raid_bdev1", 01:21:54.880 "uuid": "8c5ddeb1-fa5e-4101-94f0-f74e2b4aba34", 01:21:54.880 "strip_size_kb": 64, 01:21:54.880 "state": "online", 01:21:54.880 "raid_level": "concat", 01:21:54.880 "superblock": true, 01:21:54.880 "num_base_bdevs": 2, 01:21:54.880 "num_base_bdevs_discovered": 2, 01:21:54.880 "num_base_bdevs_operational": 2, 01:21:54.880 "base_bdevs_list": [ 01:21:54.880 { 01:21:54.880 "name": "BaseBdev1", 01:21:54.880 "uuid": "2965717c-c288-53b3-a19d-0f2e7434e29c", 01:21:54.880 "is_configured": true, 01:21:54.880 "data_offset": 2048, 01:21:54.880 "data_size": 63488 01:21:54.880 }, 01:21:54.880 { 01:21:54.880 "name": "BaseBdev2", 01:21:54.880 "uuid": "b0d3da82-5835-5acd-9706-f4e2ad567979", 01:21:54.880 "is_configured": true, 01:21:54.880 "data_offset": 2048, 01:21:54.880 "data_size": 63488 01:21:54.880 } 01:21:54.880 ] 01:21:54.880 }' 01:21:54.880 05:16:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:21:54.880 05:16:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 01:21:55.463 05:16:46 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 01:21:55.463 05:16:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:55.463 05:16:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 01:21:55.463 [2024-12-09 05:16:46.899911] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 01:21:55.463 [2024-12-09 05:16:46.900107] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 01:21:55.463 [2024-12-09 05:16:46.903960] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 01:21:55.463 [2024-12-09 05:16:46.904241] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:21:55.463 [2024-12-09 05:16:46.904453] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 01:21:55.463 [2024-12-09 05:16:46.904617] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 01:21:55.463 { 01:21:55.463 "results": [ 01:21:55.463 { 01:21:55.463 "job": "raid_bdev1", 01:21:55.463 "core_mask": "0x1", 01:21:55.463 "workload": "randrw", 01:21:55.463 "percentage": 50, 01:21:55.463 "status": "finished", 01:21:55.463 "queue_depth": 1, 01:21:55.463 "io_size": 131072, 01:21:55.463 "runtime": 1.451671, 01:21:55.463 "iops": 10968.04992315752, 01:21:55.463 "mibps": 1371.00624039469, 01:21:55.463 "io_failed": 1, 01:21:55.463 "io_timeout": 0, 01:21:55.463 "avg_latency_us": 126.68457862554452, 01:21:55.463 "min_latency_us": 34.90909090909091, 01:21:55.464 "max_latency_us": 1690.530909090909 01:21:55.464 } 01:21:55.464 ], 01:21:55.464 "core_count": 1 01:21:55.464 } 01:21:55.464 05:16:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:55.464 05:16:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62381 01:21:55.464 05:16:46 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@954 -- # '[' -z 62381 ']' 01:21:55.464 05:16:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 62381 01:21:55.464 05:16:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 01:21:55.464 05:16:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:21:55.464 05:16:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62381 01:21:55.464 killing process with pid 62381 01:21:55.464 05:16:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:21:55.464 05:16:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:21:55.464 05:16:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62381' 01:21:55.464 05:16:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 62381 01:21:55.464 [2024-12-09 05:16:46.946345] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 01:21:55.464 05:16:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 62381 01:21:55.722 [2024-12-09 05:16:47.078591] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 01:21:56.657 05:16:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 01:21:56.657 05:16:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.ctPhHDXx35 01:21:56.657 05:16:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 01:21:56.657 05:16:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.69 01:21:56.657 05:16:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 01:21:56.657 05:16:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 01:21:56.657 05:16:48 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@200 -- # return 1 01:21:56.657 05:16:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.69 != \0\.\0\0 ]] 01:21:56.657 01:21:56.657 real 0m4.708s 01:21:56.657 user 0m5.860s 01:21:56.657 sys 0m0.619s 01:21:56.657 05:16:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 01:21:56.657 05:16:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 01:21:56.657 ************************************ 01:21:56.657 END TEST raid_write_error_test 01:21:56.657 ************************************ 01:21:56.915 05:16:48 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 01:21:56.915 05:16:48 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 01:21:56.915 05:16:48 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 01:21:56.915 05:16:48 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 01:21:56.915 05:16:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 01:21:56.915 ************************************ 01:21:56.915 START TEST raid_state_function_test 01:21:56.915 ************************************ 01:21:56.915 05:16:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 false 01:21:56.915 05:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 01:21:56.915 05:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 01:21:56.915 05:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 01:21:56.915 05:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 01:21:56.915 05:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 01:21:56.915 05:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 
01:21:56.915 05:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 01:21:56.915 05:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 01:21:56.915 05:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 01:21:56.915 05:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 01:21:56.915 05:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 01:21:56.915 05:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 01:21:56.915 05:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 01:21:56.915 05:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 01:21:56.915 05:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 01:21:56.915 05:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 01:21:56.915 05:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 01:21:56.915 05:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 01:21:56.915 05:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 01:21:56.915 05:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 01:21:56.915 Process raid pid: 62525 01:21:56.915 05:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 01:21:56.915 05:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 01:21:56.916 05:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=62525 01:21:56.916 05:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62525' 
01:21:56.916 05:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 01:21:56.916 05:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 62525 01:21:56.916 05:16:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 62525 ']' 01:21:56.916 05:16:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:21:56.916 05:16:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 01:21:56.916 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:21:56.916 05:16:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:21:56.916 05:16:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 01:21:56.916 05:16:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:21:56.916 [2024-12-09 05:16:48.396855] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
01:21:56.916 [2024-12-09 05:16:48.397208] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:21:57.173 [2024-12-09 05:16:48.569129] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:21:57.173 [2024-12-09 05:16:48.703331] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:21:57.432 [2024-12-09 05:16:48.919713] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 01:21:57.432 [2024-12-09 05:16:48.919761] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 01:21:57.999 05:16:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:21:57.999 05:16:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 01:21:57.999 05:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 01:21:57.999 05:16:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:57.999 05:16:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:21:57.999 [2024-12-09 05:16:49.466894] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 01:21:57.999 [2024-12-09 05:16:49.466962] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 01:21:57.999 [2024-12-09 05:16:49.466987] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 01:21:57.999 [2024-12-09 05:16:49.467002] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 01:21:57.999 05:16:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:57.999 05:16:49 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 01:21:57.999 05:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:21:57.999 05:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:21:57.999 05:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:21:57.999 05:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:21:57.999 05:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 01:21:57.999 05:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:21:57.999 05:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:21:57.999 05:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:21:57.999 05:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:21:57.999 05:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:21:57.999 05:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:21:57.999 05:16:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:57.999 05:16:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:21:57.999 05:16:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:57.999 05:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:21:57.999 "name": "Existed_Raid", 01:21:57.999 "uuid": "00000000-0000-0000-0000-000000000000", 01:21:57.999 "strip_size_kb": 0, 01:21:57.999 "state": "configuring", 01:21:57.999 
"raid_level": "raid1", 01:21:57.999 "superblock": false, 01:21:57.999 "num_base_bdevs": 2, 01:21:57.999 "num_base_bdevs_discovered": 0, 01:21:57.999 "num_base_bdevs_operational": 2, 01:21:57.999 "base_bdevs_list": [ 01:21:57.999 { 01:21:57.999 "name": "BaseBdev1", 01:21:57.999 "uuid": "00000000-0000-0000-0000-000000000000", 01:21:57.999 "is_configured": false, 01:21:57.999 "data_offset": 0, 01:21:57.999 "data_size": 0 01:21:57.999 }, 01:21:57.999 { 01:21:57.999 "name": "BaseBdev2", 01:21:57.999 "uuid": "00000000-0000-0000-0000-000000000000", 01:21:57.999 "is_configured": false, 01:21:57.999 "data_offset": 0, 01:21:57.999 "data_size": 0 01:21:57.999 } 01:21:57.999 ] 01:21:57.999 }' 01:21:57.999 05:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:21:57.999 05:16:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:21:58.565 05:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 01:21:58.565 05:16:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:58.565 05:16:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:21:58.565 [2024-12-09 05:16:49.978971] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 01:21:58.565 [2024-12-09 05:16:49.979135] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 01:21:58.565 05:16:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:58.565 05:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 01:21:58.565 05:16:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:58.565 05:16:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
01:21:58.565 [2024-12-09 05:16:49.986960] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 01:21:58.565 [2024-12-09 05:16:49.987006] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 01:21:58.565 [2024-12-09 05:16:49.987019] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 01:21:58.565 [2024-12-09 05:16:49.987036] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 01:21:58.565 05:16:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:58.565 05:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 01:21:58.565 05:16:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:58.565 05:16:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:21:58.565 [2024-12-09 05:16:50.038165] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 01:21:58.565 BaseBdev1 01:21:58.565 05:16:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:58.565 05:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 01:21:58.565 05:16:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 01:21:58.565 05:16:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 01:21:58.565 05:16:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 01:21:58.565 05:16:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 01:21:58.565 05:16:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 01:21:58.565 05:16:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 01:21:58.565 05:16:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:58.565 05:16:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:21:58.565 05:16:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:58.565 05:16:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 01:21:58.565 05:16:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:58.565 05:16:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:21:58.565 [ 01:21:58.565 { 01:21:58.565 "name": "BaseBdev1", 01:21:58.565 "aliases": [ 01:21:58.565 "74cb50c0-d29b-4951-a22a-487c9e9ccc68" 01:21:58.565 ], 01:21:58.565 "product_name": "Malloc disk", 01:21:58.566 "block_size": 512, 01:21:58.566 "num_blocks": 65536, 01:21:58.566 "uuid": "74cb50c0-d29b-4951-a22a-487c9e9ccc68", 01:21:58.566 "assigned_rate_limits": { 01:21:58.566 "rw_ios_per_sec": 0, 01:21:58.566 "rw_mbytes_per_sec": 0, 01:21:58.566 "r_mbytes_per_sec": 0, 01:21:58.566 "w_mbytes_per_sec": 0 01:21:58.566 }, 01:21:58.566 "claimed": true, 01:21:58.566 "claim_type": "exclusive_write", 01:21:58.566 "zoned": false, 01:21:58.566 "supported_io_types": { 01:21:58.566 "read": true, 01:21:58.566 "write": true, 01:21:58.566 "unmap": true, 01:21:58.566 "flush": true, 01:21:58.566 "reset": true, 01:21:58.566 "nvme_admin": false, 01:21:58.566 "nvme_io": false, 01:21:58.566 "nvme_io_md": false, 01:21:58.566 "write_zeroes": true, 01:21:58.566 "zcopy": true, 01:21:58.566 "get_zone_info": false, 01:21:58.566 "zone_management": false, 01:21:58.566 "zone_append": false, 01:21:58.566 "compare": false, 01:21:58.566 "compare_and_write": false, 01:21:58.566 "abort": true, 01:21:58.566 "seek_hole": false, 01:21:58.566 "seek_data": false, 01:21:58.566 "copy": true, 01:21:58.566 "nvme_iov_md": 
false 01:21:58.566 }, 01:21:58.566 "memory_domains": [ 01:21:58.566 { 01:21:58.566 "dma_device_id": "system", 01:21:58.566 "dma_device_type": 1 01:21:58.566 }, 01:21:58.566 { 01:21:58.566 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:21:58.566 "dma_device_type": 2 01:21:58.566 } 01:21:58.566 ], 01:21:58.566 "driver_specific": {} 01:21:58.566 } 01:21:58.566 ] 01:21:58.566 05:16:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:58.566 05:16:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 01:21:58.566 05:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 01:21:58.566 05:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:21:58.566 05:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:21:58.566 05:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:21:58.566 05:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:21:58.566 05:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 01:21:58.566 05:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:21:58.566 05:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:21:58.566 05:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:21:58.566 05:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:21:58.566 05:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:21:58.566 05:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:21:58.566 
05:16:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:58.566 05:16:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:21:58.566 05:16:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:58.566 05:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:21:58.566 "name": "Existed_Raid", 01:21:58.566 "uuid": "00000000-0000-0000-0000-000000000000", 01:21:58.566 "strip_size_kb": 0, 01:21:58.566 "state": "configuring", 01:21:58.566 "raid_level": "raid1", 01:21:58.566 "superblock": false, 01:21:58.566 "num_base_bdevs": 2, 01:21:58.566 "num_base_bdevs_discovered": 1, 01:21:58.566 "num_base_bdevs_operational": 2, 01:21:58.566 "base_bdevs_list": [ 01:21:58.566 { 01:21:58.566 "name": "BaseBdev1", 01:21:58.566 "uuid": "74cb50c0-d29b-4951-a22a-487c9e9ccc68", 01:21:58.566 "is_configured": true, 01:21:58.566 "data_offset": 0, 01:21:58.566 "data_size": 65536 01:21:58.566 }, 01:21:58.566 { 01:21:58.566 "name": "BaseBdev2", 01:21:58.566 "uuid": "00000000-0000-0000-0000-000000000000", 01:21:58.566 "is_configured": false, 01:21:58.566 "data_offset": 0, 01:21:58.566 "data_size": 0 01:21:58.566 } 01:21:58.566 ] 01:21:58.566 }' 01:21:58.566 05:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:21:58.566 05:16:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:21:59.130 05:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 01:21:59.130 05:16:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:59.130 05:16:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:21:59.130 [2024-12-09 05:16:50.574393] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 01:21:59.130 [2024-12-09 05:16:50.574473] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 01:21:59.130 05:16:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:59.146 05:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 01:21:59.146 05:16:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:59.146 05:16:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:21:59.146 [2024-12-09 05:16:50.582377] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 01:21:59.146 [2024-12-09 05:16:50.585002] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 01:21:59.146 [2024-12-09 05:16:50.585086] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 01:21:59.146 05:16:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:59.146 05:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 01:21:59.146 05:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 01:21:59.146 05:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 01:21:59.146 05:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:21:59.146 05:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:21:59.146 05:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:21:59.146 05:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:21:59.146 05:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 01:21:59.146 05:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:21:59.146 05:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:21:59.146 05:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:21:59.146 05:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:21:59.146 05:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:21:59.146 05:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:21:59.146 05:16:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:59.146 05:16:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:21:59.146 05:16:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:59.146 05:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:21:59.146 "name": "Existed_Raid", 01:21:59.146 "uuid": "00000000-0000-0000-0000-000000000000", 01:21:59.146 "strip_size_kb": 0, 01:21:59.146 "state": "configuring", 01:21:59.146 "raid_level": "raid1", 01:21:59.146 "superblock": false, 01:21:59.146 "num_base_bdevs": 2, 01:21:59.146 "num_base_bdevs_discovered": 1, 01:21:59.146 "num_base_bdevs_operational": 2, 01:21:59.146 "base_bdevs_list": [ 01:21:59.146 { 01:21:59.146 "name": "BaseBdev1", 01:21:59.146 "uuid": "74cb50c0-d29b-4951-a22a-487c9e9ccc68", 01:21:59.146 "is_configured": true, 01:21:59.146 "data_offset": 0, 01:21:59.146 "data_size": 65536 01:21:59.146 }, 01:21:59.146 { 01:21:59.146 "name": "BaseBdev2", 01:21:59.146 "uuid": "00000000-0000-0000-0000-000000000000", 01:21:59.146 "is_configured": false, 01:21:59.146 "data_offset": 0, 01:21:59.146 "data_size": 0 01:21:59.146 } 01:21:59.146 ] 
01:21:59.146 }' 01:21:59.146 05:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:21:59.146 05:16:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:21:59.712 05:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 01:21:59.712 05:16:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:59.712 05:16:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:21:59.712 [2024-12-09 05:16:51.129692] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 01:21:59.712 [2024-12-09 05:16:51.129784] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 01:21:59.712 [2024-12-09 05:16:51.129799] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 01:21:59.712 [2024-12-09 05:16:51.130203] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 01:21:59.712 [2024-12-09 05:16:51.130474] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 01:21:59.712 [2024-12-09 05:16:51.130825] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 01:21:59.712 [2024-12-09 05:16:51.131216] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:21:59.712 BaseBdev2 01:21:59.712 05:16:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:59.712 05:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 01:21:59.712 05:16:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 01:21:59.712 05:16:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 01:21:59.712 05:16:51 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@905 -- # local i 01:21:59.712 05:16:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 01:21:59.712 05:16:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 01:21:59.712 05:16:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 01:21:59.712 05:16:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:59.712 05:16:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:21:59.712 05:16:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:59.712 05:16:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 01:21:59.712 05:16:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:59.712 05:16:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:21:59.712 [ 01:21:59.712 { 01:21:59.712 "name": "BaseBdev2", 01:21:59.712 "aliases": [ 01:21:59.712 "36ba921a-b040-491e-b35a-486e354016da" 01:21:59.712 ], 01:21:59.712 "product_name": "Malloc disk", 01:21:59.712 "block_size": 512, 01:21:59.712 "num_blocks": 65536, 01:21:59.712 "uuid": "36ba921a-b040-491e-b35a-486e354016da", 01:21:59.712 "assigned_rate_limits": { 01:21:59.712 "rw_ios_per_sec": 0, 01:21:59.713 "rw_mbytes_per_sec": 0, 01:21:59.713 "r_mbytes_per_sec": 0, 01:21:59.713 "w_mbytes_per_sec": 0 01:21:59.713 }, 01:21:59.713 "claimed": true, 01:21:59.713 "claim_type": "exclusive_write", 01:21:59.713 "zoned": false, 01:21:59.713 "supported_io_types": { 01:21:59.713 "read": true, 01:21:59.713 "write": true, 01:21:59.713 "unmap": true, 01:21:59.713 "flush": true, 01:21:59.713 "reset": true, 01:21:59.713 "nvme_admin": false, 01:21:59.713 "nvme_io": false, 01:21:59.713 "nvme_io_md": false, 01:21:59.713 "write_zeroes": 
true, 01:21:59.713 "zcopy": true, 01:21:59.713 "get_zone_info": false, 01:21:59.713 "zone_management": false, 01:21:59.713 "zone_append": false, 01:21:59.713 "compare": false, 01:21:59.713 "compare_and_write": false, 01:21:59.713 "abort": true, 01:21:59.713 "seek_hole": false, 01:21:59.713 "seek_data": false, 01:21:59.713 "copy": true, 01:21:59.713 "nvme_iov_md": false 01:21:59.713 }, 01:21:59.713 "memory_domains": [ 01:21:59.713 { 01:21:59.713 "dma_device_id": "system", 01:21:59.713 "dma_device_type": 1 01:21:59.713 }, 01:21:59.713 { 01:21:59.713 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:21:59.713 "dma_device_type": 2 01:21:59.713 } 01:21:59.713 ], 01:21:59.713 "driver_specific": {} 01:21:59.713 } 01:21:59.713 ] 01:21:59.713 05:16:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:59.713 05:16:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 01:21:59.713 05:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 01:21:59.713 05:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 01:21:59.713 05:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 01:21:59.713 05:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:21:59.713 05:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:21:59.713 05:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:21:59.713 05:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:21:59.713 05:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 01:21:59.713 05:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:21:59.713 05:16:51 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:21:59.713 05:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:21:59.713 05:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:21:59.713 05:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:21:59.713 05:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:21:59.713 05:16:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:59.713 05:16:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:21:59.713 05:16:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:59.713 05:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:21:59.713 "name": "Existed_Raid", 01:21:59.713 "uuid": "770ae3d8-311f-40a5-8211-6eb059046481", 01:21:59.713 "strip_size_kb": 0, 01:21:59.713 "state": "online", 01:21:59.713 "raid_level": "raid1", 01:21:59.713 "superblock": false, 01:21:59.713 "num_base_bdevs": 2, 01:21:59.713 "num_base_bdevs_discovered": 2, 01:21:59.713 "num_base_bdevs_operational": 2, 01:21:59.713 "base_bdevs_list": [ 01:21:59.713 { 01:21:59.713 "name": "BaseBdev1", 01:21:59.713 "uuid": "74cb50c0-d29b-4951-a22a-487c9e9ccc68", 01:21:59.713 "is_configured": true, 01:21:59.713 "data_offset": 0, 01:21:59.713 "data_size": 65536 01:21:59.713 }, 01:21:59.713 { 01:21:59.713 "name": "BaseBdev2", 01:21:59.713 "uuid": "36ba921a-b040-491e-b35a-486e354016da", 01:21:59.713 "is_configured": true, 01:21:59.713 "data_offset": 0, 01:21:59.713 "data_size": 65536 01:21:59.713 } 01:21:59.713 ] 01:21:59.713 }' 01:21:59.713 05:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:21:59.713 05:16:51 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:22:00.277 05:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 01:22:00.277 05:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 01:22:00.277 05:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 01:22:00.277 05:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 01:22:00.277 05:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 01:22:00.277 05:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 01:22:00.277 05:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 01:22:00.277 05:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 01:22:00.277 05:16:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:00.277 05:16:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:22:00.277 [2024-12-09 05:16:51.678201] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 01:22:00.277 05:16:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:00.277 05:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 01:22:00.277 "name": "Existed_Raid", 01:22:00.277 "aliases": [ 01:22:00.277 "770ae3d8-311f-40a5-8211-6eb059046481" 01:22:00.277 ], 01:22:00.277 "product_name": "Raid Volume", 01:22:00.277 "block_size": 512, 01:22:00.277 "num_blocks": 65536, 01:22:00.277 "uuid": "770ae3d8-311f-40a5-8211-6eb059046481", 01:22:00.277 "assigned_rate_limits": { 01:22:00.277 "rw_ios_per_sec": 0, 01:22:00.277 "rw_mbytes_per_sec": 0, 01:22:00.277 "r_mbytes_per_sec": 0, 01:22:00.277 
"w_mbytes_per_sec": 0 01:22:00.277 }, 01:22:00.277 "claimed": false, 01:22:00.277 "zoned": false, 01:22:00.277 "supported_io_types": { 01:22:00.277 "read": true, 01:22:00.277 "write": true, 01:22:00.277 "unmap": false, 01:22:00.277 "flush": false, 01:22:00.277 "reset": true, 01:22:00.277 "nvme_admin": false, 01:22:00.277 "nvme_io": false, 01:22:00.277 "nvme_io_md": false, 01:22:00.277 "write_zeroes": true, 01:22:00.277 "zcopy": false, 01:22:00.277 "get_zone_info": false, 01:22:00.277 "zone_management": false, 01:22:00.277 "zone_append": false, 01:22:00.277 "compare": false, 01:22:00.277 "compare_and_write": false, 01:22:00.277 "abort": false, 01:22:00.277 "seek_hole": false, 01:22:00.277 "seek_data": false, 01:22:00.277 "copy": false, 01:22:00.277 "nvme_iov_md": false 01:22:00.277 }, 01:22:00.277 "memory_domains": [ 01:22:00.277 { 01:22:00.277 "dma_device_id": "system", 01:22:00.277 "dma_device_type": 1 01:22:00.277 }, 01:22:00.277 { 01:22:00.277 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:22:00.277 "dma_device_type": 2 01:22:00.277 }, 01:22:00.277 { 01:22:00.277 "dma_device_id": "system", 01:22:00.277 "dma_device_type": 1 01:22:00.277 }, 01:22:00.277 { 01:22:00.277 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:22:00.277 "dma_device_type": 2 01:22:00.277 } 01:22:00.277 ], 01:22:00.277 "driver_specific": { 01:22:00.277 "raid": { 01:22:00.277 "uuid": "770ae3d8-311f-40a5-8211-6eb059046481", 01:22:00.277 "strip_size_kb": 0, 01:22:00.277 "state": "online", 01:22:00.277 "raid_level": "raid1", 01:22:00.277 "superblock": false, 01:22:00.277 "num_base_bdevs": 2, 01:22:00.277 "num_base_bdevs_discovered": 2, 01:22:00.277 "num_base_bdevs_operational": 2, 01:22:00.277 "base_bdevs_list": [ 01:22:00.277 { 01:22:00.277 "name": "BaseBdev1", 01:22:00.277 "uuid": "74cb50c0-d29b-4951-a22a-487c9e9ccc68", 01:22:00.277 "is_configured": true, 01:22:00.277 "data_offset": 0, 01:22:00.277 "data_size": 65536 01:22:00.277 }, 01:22:00.277 { 01:22:00.277 "name": "BaseBdev2", 01:22:00.277 "uuid": 
"36ba921a-b040-491e-b35a-486e354016da", 01:22:00.277 "is_configured": true, 01:22:00.277 "data_offset": 0, 01:22:00.277 "data_size": 65536 01:22:00.277 } 01:22:00.277 ] 01:22:00.277 } 01:22:00.277 } 01:22:00.277 }' 01:22:00.277 05:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 01:22:00.277 05:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 01:22:00.277 BaseBdev2' 01:22:00.277 05:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:22:00.277 05:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 01:22:00.277 05:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:22:00.277 05:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 01:22:00.278 05:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:22:00.278 05:16:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:00.278 05:16:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:22:00.278 05:16:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:00.278 05:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:22:00.278 05:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:22:00.278 05:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:22:00.278 05:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 01:22:00.278 05:16:51 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:22:00.278 05:16:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:00.278 05:16:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:22:00.535 05:16:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:00.535 05:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:22:00.535 05:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:22:00.535 05:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 01:22:00.535 05:16:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:00.535 05:16:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:22:00.535 [2024-12-09 05:16:51.957990] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 01:22:00.535 05:16:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:00.535 05:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 01:22:00.535 05:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 01:22:00.535 05:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 01:22:00.535 05:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 01:22:00.535 05:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 01:22:00.535 05:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 01:22:00.535 05:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 01:22:00.535 05:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:22:00.535 05:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:22:00.535 05:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:22:00.535 05:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 01:22:00.535 05:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:22:00.535 05:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:22:00.535 05:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:22:00.535 05:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:22:00.535 05:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:22:00.536 05:16:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:00.536 05:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:22:00.536 05:16:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:22:00.536 05:16:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:00.536 05:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:22:00.536 "name": "Existed_Raid", 01:22:00.536 "uuid": "770ae3d8-311f-40a5-8211-6eb059046481", 01:22:00.536 "strip_size_kb": 0, 01:22:00.536 "state": "online", 01:22:00.536 "raid_level": "raid1", 01:22:00.536 "superblock": false, 01:22:00.536 "num_base_bdevs": 2, 01:22:00.536 "num_base_bdevs_discovered": 1, 01:22:00.536 "num_base_bdevs_operational": 1, 01:22:00.536 "base_bdevs_list": [ 01:22:00.536 { 
01:22:00.536 "name": null, 01:22:00.536 "uuid": "00000000-0000-0000-0000-000000000000", 01:22:00.536 "is_configured": false, 01:22:00.536 "data_offset": 0, 01:22:00.536 "data_size": 65536 01:22:00.536 }, 01:22:00.536 { 01:22:00.536 "name": "BaseBdev2", 01:22:00.536 "uuid": "36ba921a-b040-491e-b35a-486e354016da", 01:22:00.536 "is_configured": true, 01:22:00.536 "data_offset": 0, 01:22:00.536 "data_size": 65536 01:22:00.536 } 01:22:00.536 ] 01:22:00.536 }' 01:22:00.536 05:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:22:00.536 05:16:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:22:01.102 05:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 01:22:01.102 05:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 01:22:01.102 05:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 01:22:01.102 05:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 01:22:01.102 05:16:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:01.102 05:16:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:22:01.102 05:16:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:01.102 05:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 01:22:01.102 05:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 01:22:01.102 05:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 01:22:01.102 05:16:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:01.102 05:16:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
01:22:01.102 [2024-12-09 05:16:52.652330] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 01:22:01.102 [2024-12-09 05:16:52.652721] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 01:22:01.361 [2024-12-09 05:16:52.737340] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 01:22:01.361 [2024-12-09 05:16:52.737460] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 01:22:01.361 [2024-12-09 05:16:52.737482] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 01:22:01.361 05:16:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:01.361 05:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 01:22:01.361 05:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 01:22:01.361 05:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 01:22:01.361 05:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 01:22:01.361 05:16:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:01.361 05:16:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:22:01.361 05:16:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:01.361 05:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 01:22:01.361 05:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 01:22:01.361 05:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 01:22:01.361 05:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 62525 01:22:01.361 05:16:52 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 62525 ']' 01:22:01.361 05:16:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 62525 01:22:01.361 05:16:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 01:22:01.361 05:16:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:22:01.362 05:16:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62525 01:22:01.362 killing process with pid 62525 01:22:01.362 05:16:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:22:01.362 05:16:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:22:01.362 05:16:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62525' 01:22:01.362 05:16:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 62525 01:22:01.362 [2024-12-09 05:16:52.819594] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 01:22:01.362 05:16:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 62525 01:22:01.362 [2024-12-09 05:16:52.833966] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 01:22:02.739 05:16:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 01:22:02.739 01:22:02.739 real 0m5.627s 01:22:02.739 user 0m8.456s 01:22:02.739 sys 0m0.821s 01:22:02.739 05:16:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 01:22:02.739 05:16:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:22:02.739 ************************************ 01:22:02.739 END TEST raid_state_function_test 01:22:02.739 ************************************ 01:22:02.739 05:16:53 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test 
raid_state_function_test_sb raid_state_function_test raid1 2 true 01:22:02.739 05:16:53 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 01:22:02.739 05:16:53 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 01:22:02.739 05:16:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 01:22:02.739 ************************************ 01:22:02.739 START TEST raid_state_function_test_sb 01:22:02.739 ************************************ 01:22:02.739 05:16:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 01:22:02.739 05:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 01:22:02.739 05:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 01:22:02.739 05:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 01:22:02.739 05:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 01:22:02.739 05:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 01:22:02.739 05:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 01:22:02.740 05:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 01:22:02.740 05:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 01:22:02.740 05:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 01:22:02.740 05:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 01:22:02.740 05:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 01:22:02.740 05:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 01:22:02.740 05:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2') 01:22:02.740 05:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 01:22:02.740 05:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 01:22:02.740 05:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 01:22:02.740 05:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 01:22:02.740 05:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 01:22:02.740 05:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 01:22:02.740 05:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 01:22:02.740 05:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 01:22:02.740 05:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 01:22:02.740 Process raid pid: 62783 01:22:02.740 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
01:22:02.740 05:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=62783 01:22:02.740 05:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62783' 01:22:02.740 05:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 62783 01:22:02.740 05:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 01:22:02.740 05:16:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 62783 ']' 01:22:02.740 05:16:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:22:02.740 05:16:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 01:22:02.740 05:16:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:22:02.740 05:16:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 01:22:02.740 05:16:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:22:02.740 [2024-12-09 05:16:54.101473] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
01:22:02.740 [2024-12-09 05:16:54.101989] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:22:02.740 [2024-12-09 05:16:54.284012] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:22:02.998 [2024-12-09 05:16:54.412642] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:22:03.272 [2024-12-09 05:16:54.620437] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 01:22:03.272 [2024-12-09 05:16:54.620665] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 01:22:03.530 05:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:22:03.530 05:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 01:22:03.530 05:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 01:22:03.530 05:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:03.530 05:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:22:03.530 [2024-12-09 05:16:55.138372] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 01:22:03.530 [2024-12-09 05:16:55.138456] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 01:22:03.530 [2024-12-09 05:16:55.138472] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 01:22:03.530 [2024-12-09 05:16:55.138487] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 01:22:03.530 05:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:03.530 
05:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 01:22:03.530 05:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:22:03.530 05:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:22:03.530 05:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:22:03.530 05:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:22:03.530 05:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 01:22:03.530 05:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:22:03.530 05:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:22:03.530 05:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:22:03.530 05:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 01:22:03.788 05:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:22:03.788 05:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:03.788 05:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:22:03.788 05:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:22:03.788 05:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:03.788 05:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:22:03.788 "name": "Existed_Raid", 01:22:03.788 "uuid": "1a286bfb-c8ac-44b4-9a1a-252f3b4978d2", 01:22:03.788 "strip_size_kb": 0, 
01:22:03.788 "state": "configuring", 01:22:03.788 "raid_level": "raid1", 01:22:03.788 "superblock": true, 01:22:03.788 "num_base_bdevs": 2, 01:22:03.788 "num_base_bdevs_discovered": 0, 01:22:03.788 "num_base_bdevs_operational": 2, 01:22:03.788 "base_bdevs_list": [ 01:22:03.788 { 01:22:03.788 "name": "BaseBdev1", 01:22:03.788 "uuid": "00000000-0000-0000-0000-000000000000", 01:22:03.788 "is_configured": false, 01:22:03.788 "data_offset": 0, 01:22:03.788 "data_size": 0 01:22:03.788 }, 01:22:03.788 { 01:22:03.788 "name": "BaseBdev2", 01:22:03.788 "uuid": "00000000-0000-0000-0000-000000000000", 01:22:03.788 "is_configured": false, 01:22:03.788 "data_offset": 0, 01:22:03.788 "data_size": 0 01:22:03.788 } 01:22:03.788 ] 01:22:03.788 }' 01:22:03.788 05:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:22:03.788 05:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:22:04.047 05:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 01:22:04.047 05:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:04.047 05:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:22:04.047 [2024-12-09 05:16:55.622407] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 01:22:04.047 [2024-12-09 05:16:55.622649] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 01:22:04.047 05:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:04.047 05:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 01:22:04.047 05:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:04.047 05:16:55 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:22:04.047 [2024-12-09 05:16:55.630407] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 01:22:04.047 [2024-12-09 05:16:55.630604] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 01:22:04.047 [2024-12-09 05:16:55.630629] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 01:22:04.047 [2024-12-09 05:16:55.630649] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 01:22:04.047 05:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:04.047 05:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 01:22:04.047 05:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:04.047 05:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:22:04.305 [2024-12-09 05:16:55.676236] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 01:22:04.305 BaseBdev1 01:22:04.305 05:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:04.305 05:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 01:22:04.305 05:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 01:22:04.305 05:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 01:22:04.305 05:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 01:22:04.305 05:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 01:22:04.305 05:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 01:22:04.305 05:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 01:22:04.305 05:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:04.305 05:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:22:04.305 05:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:04.305 05:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 01:22:04.305 05:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:04.305 05:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:22:04.305 [ 01:22:04.305 { 01:22:04.305 "name": "BaseBdev1", 01:22:04.305 "aliases": [ 01:22:04.305 "0ca7d126-77af-4479-abfd-d96205bf4912" 01:22:04.305 ], 01:22:04.305 "product_name": "Malloc disk", 01:22:04.305 "block_size": 512, 01:22:04.305 "num_blocks": 65536, 01:22:04.305 "uuid": "0ca7d126-77af-4479-abfd-d96205bf4912", 01:22:04.305 "assigned_rate_limits": { 01:22:04.305 "rw_ios_per_sec": 0, 01:22:04.305 "rw_mbytes_per_sec": 0, 01:22:04.305 "r_mbytes_per_sec": 0, 01:22:04.305 "w_mbytes_per_sec": 0 01:22:04.305 }, 01:22:04.305 "claimed": true, 01:22:04.305 "claim_type": "exclusive_write", 01:22:04.305 "zoned": false, 01:22:04.305 "supported_io_types": { 01:22:04.305 "read": true, 01:22:04.305 "write": true, 01:22:04.305 "unmap": true, 01:22:04.305 "flush": true, 01:22:04.305 "reset": true, 01:22:04.305 "nvme_admin": false, 01:22:04.305 "nvme_io": false, 01:22:04.305 "nvme_io_md": false, 01:22:04.305 "write_zeroes": true, 01:22:04.305 "zcopy": true, 01:22:04.305 "get_zone_info": false, 01:22:04.305 "zone_management": false, 01:22:04.305 "zone_append": false, 01:22:04.305 "compare": false, 01:22:04.305 "compare_and_write": false, 01:22:04.305 
"abort": true, 01:22:04.305 "seek_hole": false, 01:22:04.305 "seek_data": false, 01:22:04.305 "copy": true, 01:22:04.305 "nvme_iov_md": false 01:22:04.305 }, 01:22:04.305 "memory_domains": [ 01:22:04.305 { 01:22:04.305 "dma_device_id": "system", 01:22:04.305 "dma_device_type": 1 01:22:04.305 }, 01:22:04.305 { 01:22:04.305 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:22:04.305 "dma_device_type": 2 01:22:04.305 } 01:22:04.305 ], 01:22:04.305 "driver_specific": {} 01:22:04.305 } 01:22:04.305 ] 01:22:04.305 05:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:04.305 05:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 01:22:04.305 05:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 01:22:04.305 05:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:22:04.305 05:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:22:04.305 05:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:22:04.305 05:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:22:04.305 05:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 01:22:04.305 05:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:22:04.305 05:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:22:04.305 05:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:22:04.305 05:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 01:22:04.305 05:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 01:22:04.305 05:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:04.305 05:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:22:04.305 05:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:22:04.305 05:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:04.305 05:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:22:04.305 "name": "Existed_Raid", 01:22:04.305 "uuid": "23a35cdd-e669-4be6-a34a-dbde7f58e7a8", 01:22:04.305 "strip_size_kb": 0, 01:22:04.305 "state": "configuring", 01:22:04.305 "raid_level": "raid1", 01:22:04.305 "superblock": true, 01:22:04.306 "num_base_bdevs": 2, 01:22:04.306 "num_base_bdevs_discovered": 1, 01:22:04.306 "num_base_bdevs_operational": 2, 01:22:04.306 "base_bdevs_list": [ 01:22:04.306 { 01:22:04.306 "name": "BaseBdev1", 01:22:04.306 "uuid": "0ca7d126-77af-4479-abfd-d96205bf4912", 01:22:04.306 "is_configured": true, 01:22:04.306 "data_offset": 2048, 01:22:04.306 "data_size": 63488 01:22:04.306 }, 01:22:04.306 { 01:22:04.306 "name": "BaseBdev2", 01:22:04.306 "uuid": "00000000-0000-0000-0000-000000000000", 01:22:04.306 "is_configured": false, 01:22:04.306 "data_offset": 0, 01:22:04.306 "data_size": 0 01:22:04.306 } 01:22:04.306 ] 01:22:04.306 }' 01:22:04.306 05:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:22:04.306 05:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:22:04.872 05:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 01:22:04.872 05:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:04.872 05:16:56 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 01:22:04.872 [2024-12-09 05:16:56.220344] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 01:22:04.872 [2024-12-09 05:16:56.220406] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 01:22:04.872 05:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:04.873 05:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 01:22:04.873 05:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:04.873 05:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:22:04.873 [2024-12-09 05:16:56.232404] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 01:22:04.873 [2024-12-09 05:16:56.234684] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 01:22:04.873 [2024-12-09 05:16:56.234740] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 01:22:04.873 05:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:04.873 05:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 01:22:04.873 05:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 01:22:04.873 05:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 01:22:04.873 05:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:22:04.873 05:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:22:04.873 05:16:56 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:22:04.873 05:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:22:04.873 05:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 01:22:04.873 05:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:22:04.873 05:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:22:04.873 05:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:22:04.873 05:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 01:22:04.873 05:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:22:04.873 05:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:22:04.873 05:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:04.873 05:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:22:04.873 05:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:04.873 05:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:22:04.873 "name": "Existed_Raid", 01:22:04.873 "uuid": "4a99e6e1-33bc-4b35-a40d-782400b08581", 01:22:04.873 "strip_size_kb": 0, 01:22:04.873 "state": "configuring", 01:22:04.873 "raid_level": "raid1", 01:22:04.873 "superblock": true, 01:22:04.873 "num_base_bdevs": 2, 01:22:04.873 "num_base_bdevs_discovered": 1, 01:22:04.873 "num_base_bdevs_operational": 2, 01:22:04.873 "base_bdevs_list": [ 01:22:04.873 { 01:22:04.873 "name": "BaseBdev1", 01:22:04.873 "uuid": "0ca7d126-77af-4479-abfd-d96205bf4912", 01:22:04.873 "is_configured": true, 01:22:04.873 "data_offset": 2048, 
01:22:04.873 "data_size": 63488 01:22:04.873 }, 01:22:04.873 { 01:22:04.873 "name": "BaseBdev2", 01:22:04.873 "uuid": "00000000-0000-0000-0000-000000000000", 01:22:04.873 "is_configured": false, 01:22:04.873 "data_offset": 0, 01:22:04.873 "data_size": 0 01:22:04.873 } 01:22:04.873 ] 01:22:04.873 }' 01:22:04.873 05:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:22:04.873 05:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:22:05.132 05:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 01:22:05.132 05:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:05.132 05:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:22:05.390 [2024-12-09 05:16:56.779481] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 01:22:05.391 [2024-12-09 05:16:56.779767] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 01:22:05.391 [2024-12-09 05:16:56.779785] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 01:22:05.391 BaseBdev2 01:22:05.391 [2024-12-09 05:16:56.780071] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 01:22:05.391 [2024-12-09 05:16:56.780268] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 01:22:05.391 [2024-12-09 05:16:56.780297] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 01:22:05.391 [2024-12-09 05:16:56.780472] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:22:05.391 05:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:05.391 05:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # 
waitforbdev BaseBdev2 01:22:05.391 05:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 01:22:05.391 05:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 01:22:05.391 05:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 01:22:05.391 05:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 01:22:05.391 05:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 01:22:05.391 05:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 01:22:05.391 05:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:05.391 05:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:22:05.391 05:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:05.391 05:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 01:22:05.391 05:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:05.391 05:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:22:05.391 [ 01:22:05.391 { 01:22:05.391 "name": "BaseBdev2", 01:22:05.391 "aliases": [ 01:22:05.391 "a1eaf04d-ef57-411d-bd72-e1aefe14fdf7" 01:22:05.391 ], 01:22:05.391 "product_name": "Malloc disk", 01:22:05.391 "block_size": 512, 01:22:05.391 "num_blocks": 65536, 01:22:05.391 "uuid": "a1eaf04d-ef57-411d-bd72-e1aefe14fdf7", 01:22:05.391 "assigned_rate_limits": { 01:22:05.391 "rw_ios_per_sec": 0, 01:22:05.391 "rw_mbytes_per_sec": 0, 01:22:05.391 "r_mbytes_per_sec": 0, 01:22:05.391 "w_mbytes_per_sec": 0 01:22:05.391 }, 01:22:05.391 "claimed": true, 01:22:05.391 "claim_type": 
"exclusive_write", 01:22:05.391 "zoned": false, 01:22:05.391 "supported_io_types": { 01:22:05.391 "read": true, 01:22:05.391 "write": true, 01:22:05.391 "unmap": true, 01:22:05.391 "flush": true, 01:22:05.391 "reset": true, 01:22:05.391 "nvme_admin": false, 01:22:05.391 "nvme_io": false, 01:22:05.391 "nvme_io_md": false, 01:22:05.391 "write_zeroes": true, 01:22:05.391 "zcopy": true, 01:22:05.391 "get_zone_info": false, 01:22:05.391 "zone_management": false, 01:22:05.391 "zone_append": false, 01:22:05.391 "compare": false, 01:22:05.391 "compare_and_write": false, 01:22:05.391 "abort": true, 01:22:05.391 "seek_hole": false, 01:22:05.391 "seek_data": false, 01:22:05.391 "copy": true, 01:22:05.391 "nvme_iov_md": false 01:22:05.391 }, 01:22:05.391 "memory_domains": [ 01:22:05.391 { 01:22:05.391 "dma_device_id": "system", 01:22:05.391 "dma_device_type": 1 01:22:05.391 }, 01:22:05.391 { 01:22:05.391 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:22:05.391 "dma_device_type": 2 01:22:05.391 } 01:22:05.391 ], 01:22:05.391 "driver_specific": {} 01:22:05.391 } 01:22:05.391 ] 01:22:05.391 05:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:05.391 05:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 01:22:05.391 05:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 01:22:05.391 05:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 01:22:05.391 05:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 01:22:05.391 05:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:22:05.391 05:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:22:05.391 05:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 01:22:05.391 05:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:22:05.391 05:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 01:22:05.391 05:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:22:05.391 05:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:22:05.391 05:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:22:05.391 05:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 01:22:05.391 05:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:22:05.391 05:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:22:05.391 05:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:05.391 05:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:22:05.391 05:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:05.391 05:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:22:05.391 "name": "Existed_Raid", 01:22:05.391 "uuid": "4a99e6e1-33bc-4b35-a40d-782400b08581", 01:22:05.391 "strip_size_kb": 0, 01:22:05.391 "state": "online", 01:22:05.391 "raid_level": "raid1", 01:22:05.391 "superblock": true, 01:22:05.391 "num_base_bdevs": 2, 01:22:05.391 "num_base_bdevs_discovered": 2, 01:22:05.391 "num_base_bdevs_operational": 2, 01:22:05.391 "base_bdevs_list": [ 01:22:05.391 { 01:22:05.391 "name": "BaseBdev1", 01:22:05.391 "uuid": "0ca7d126-77af-4479-abfd-d96205bf4912", 01:22:05.391 "is_configured": true, 01:22:05.391 "data_offset": 2048, 01:22:05.391 "data_size": 63488 
01:22:05.391 }, 01:22:05.391 { 01:22:05.391 "name": "BaseBdev2", 01:22:05.391 "uuid": "a1eaf04d-ef57-411d-bd72-e1aefe14fdf7", 01:22:05.391 "is_configured": true, 01:22:05.391 "data_offset": 2048, 01:22:05.391 "data_size": 63488 01:22:05.391 } 01:22:05.391 ] 01:22:05.391 }' 01:22:05.391 05:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:22:05.391 05:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:22:05.957 05:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 01:22:05.957 05:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 01:22:05.957 05:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 01:22:05.957 05:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 01:22:05.957 05:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 01:22:05.957 05:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 01:22:05.957 05:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 01:22:05.957 05:16:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:05.957 05:16:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:22:05.957 05:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 01:22:05.957 [2024-12-09 05:16:57.323880] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 01:22:05.957 05:16:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:05.957 05:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 01:22:05.957 "name": 
"Existed_Raid", 01:22:05.957 "aliases": [ 01:22:05.957 "4a99e6e1-33bc-4b35-a40d-782400b08581" 01:22:05.957 ], 01:22:05.957 "product_name": "Raid Volume", 01:22:05.957 "block_size": 512, 01:22:05.957 "num_blocks": 63488, 01:22:05.957 "uuid": "4a99e6e1-33bc-4b35-a40d-782400b08581", 01:22:05.957 "assigned_rate_limits": { 01:22:05.957 "rw_ios_per_sec": 0, 01:22:05.957 "rw_mbytes_per_sec": 0, 01:22:05.957 "r_mbytes_per_sec": 0, 01:22:05.957 "w_mbytes_per_sec": 0 01:22:05.957 }, 01:22:05.957 "claimed": false, 01:22:05.957 "zoned": false, 01:22:05.957 "supported_io_types": { 01:22:05.957 "read": true, 01:22:05.957 "write": true, 01:22:05.957 "unmap": false, 01:22:05.957 "flush": false, 01:22:05.957 "reset": true, 01:22:05.957 "nvme_admin": false, 01:22:05.957 "nvme_io": false, 01:22:05.957 "nvme_io_md": false, 01:22:05.957 "write_zeroes": true, 01:22:05.957 "zcopy": false, 01:22:05.957 "get_zone_info": false, 01:22:05.957 "zone_management": false, 01:22:05.957 "zone_append": false, 01:22:05.957 "compare": false, 01:22:05.957 "compare_and_write": false, 01:22:05.957 "abort": false, 01:22:05.957 "seek_hole": false, 01:22:05.957 "seek_data": false, 01:22:05.957 "copy": false, 01:22:05.957 "nvme_iov_md": false 01:22:05.957 }, 01:22:05.957 "memory_domains": [ 01:22:05.957 { 01:22:05.957 "dma_device_id": "system", 01:22:05.957 "dma_device_type": 1 01:22:05.957 }, 01:22:05.957 { 01:22:05.957 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:22:05.957 "dma_device_type": 2 01:22:05.957 }, 01:22:05.957 { 01:22:05.957 "dma_device_id": "system", 01:22:05.957 "dma_device_type": 1 01:22:05.957 }, 01:22:05.957 { 01:22:05.957 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:22:05.957 "dma_device_type": 2 01:22:05.957 } 01:22:05.957 ], 01:22:05.957 "driver_specific": { 01:22:05.957 "raid": { 01:22:05.957 "uuid": "4a99e6e1-33bc-4b35-a40d-782400b08581", 01:22:05.957 "strip_size_kb": 0, 01:22:05.957 "state": "online", 01:22:05.957 "raid_level": "raid1", 01:22:05.957 "superblock": true, 01:22:05.957 
"num_base_bdevs": 2, 01:22:05.957 "num_base_bdevs_discovered": 2, 01:22:05.957 "num_base_bdevs_operational": 2, 01:22:05.957 "base_bdevs_list": [ 01:22:05.957 { 01:22:05.957 "name": "BaseBdev1", 01:22:05.957 "uuid": "0ca7d126-77af-4479-abfd-d96205bf4912", 01:22:05.957 "is_configured": true, 01:22:05.957 "data_offset": 2048, 01:22:05.957 "data_size": 63488 01:22:05.957 }, 01:22:05.957 { 01:22:05.957 "name": "BaseBdev2", 01:22:05.957 "uuid": "a1eaf04d-ef57-411d-bd72-e1aefe14fdf7", 01:22:05.957 "is_configured": true, 01:22:05.957 "data_offset": 2048, 01:22:05.957 "data_size": 63488 01:22:05.957 } 01:22:05.957 ] 01:22:05.957 } 01:22:05.958 } 01:22:05.958 }' 01:22:05.958 05:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 01:22:05.958 05:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 01:22:05.958 BaseBdev2' 01:22:05.958 05:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:22:05.958 05:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 01:22:05.958 05:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:22:05.958 05:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 01:22:05.958 05:16:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:05.958 05:16:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:22:05.958 05:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:22:05.958 05:16:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
01:22:05.958 05:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:22:05.958 05:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:22:05.958 05:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:22:05.958 05:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 01:22:05.958 05:16:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:05.958 05:16:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:22:05.958 05:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:22:05.958 05:16:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:06.216 05:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:22:06.216 05:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:22:06.216 05:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 01:22:06.216 05:16:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:06.216 05:16:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:22:06.216 [2024-12-09 05:16:57.591707] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 01:22:06.216 05:16:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:06.216 05:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 01:22:06.216 05:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 01:22:06.216 05:16:57 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 01:22:06.216 05:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 01:22:06.216 05:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 01:22:06.216 05:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 01:22:06.216 05:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:22:06.216 05:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:22:06.216 05:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:22:06.216 05:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:22:06.216 05:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 01:22:06.216 05:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:22:06.216 05:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:22:06.216 05:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:22:06.216 05:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 01:22:06.216 05:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:22:06.216 05:16:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:06.216 05:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:22:06.216 05:16:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:22:06.216 05:16:57 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:06.216 05:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:22:06.216 "name": "Existed_Raid", 01:22:06.216 "uuid": "4a99e6e1-33bc-4b35-a40d-782400b08581", 01:22:06.216 "strip_size_kb": 0, 01:22:06.216 "state": "online", 01:22:06.216 "raid_level": "raid1", 01:22:06.216 "superblock": true, 01:22:06.216 "num_base_bdevs": 2, 01:22:06.216 "num_base_bdevs_discovered": 1, 01:22:06.216 "num_base_bdevs_operational": 1, 01:22:06.216 "base_bdevs_list": [ 01:22:06.216 { 01:22:06.216 "name": null, 01:22:06.216 "uuid": "00000000-0000-0000-0000-000000000000", 01:22:06.216 "is_configured": false, 01:22:06.216 "data_offset": 0, 01:22:06.216 "data_size": 63488 01:22:06.216 }, 01:22:06.216 { 01:22:06.216 "name": "BaseBdev2", 01:22:06.216 "uuid": "a1eaf04d-ef57-411d-bd72-e1aefe14fdf7", 01:22:06.216 "is_configured": true, 01:22:06.216 "data_offset": 2048, 01:22:06.216 "data_size": 63488 01:22:06.216 } 01:22:06.216 ] 01:22:06.216 }' 01:22:06.216 05:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:22:06.216 05:16:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:22:06.782 05:16:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 01:22:06.782 05:16:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 01:22:06.782 05:16:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 01:22:06.782 05:16:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 01:22:06.782 05:16:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:06.782 05:16:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:22:06.782 05:16:58 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:06.782 05:16:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 01:22:06.782 05:16:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 01:22:06.782 05:16:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 01:22:06.782 05:16:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:06.782 05:16:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:22:06.782 [2024-12-09 05:16:58.261448] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 01:22:06.782 [2024-12-09 05:16:58.261621] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 01:22:06.782 [2024-12-09 05:16:58.333426] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 01:22:06.782 [2024-12-09 05:16:58.333492] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 01:22:06.782 [2024-12-09 05:16:58.333523] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 01:22:06.782 05:16:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:06.782 05:16:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 01:22:06.782 05:16:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 01:22:06.782 05:16:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 01:22:06.782 05:16:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 01:22:06.782 05:16:58 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 01:22:06.782 05:16:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:22:06.782 05:16:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:06.782 05:16:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 01:22:06.782 05:16:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 01:22:06.782 05:16:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 01:22:06.783 05:16:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 62783 01:22:06.783 05:16:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 62783 ']' 01:22:06.783 05:16:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 62783 01:22:06.783 05:16:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 01:22:06.783 05:16:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:22:06.783 05:16:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62783 01:22:07.041 killing process with pid 62783 01:22:07.041 05:16:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:22:07.041 05:16:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:22:07.041 05:16:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62783' 01:22:07.041 05:16:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 62783 01:22:07.041 [2024-12-09 05:16:58.421347] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 01:22:07.041 05:16:58 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@978 -- # wait 62783 01:22:07.041 [2024-12-09 05:16:58.434340] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 01:22:07.975 05:16:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 01:22:07.975 01:22:07.975 real 0m5.492s 01:22:07.975 user 0m8.266s 01:22:07.975 sys 0m0.827s 01:22:07.975 ************************************ 01:22:07.975 END TEST raid_state_function_test_sb 01:22:07.975 ************************************ 01:22:07.975 05:16:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 01:22:07.975 05:16:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:22:07.975 05:16:59 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 2 01:22:07.975 05:16:59 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 01:22:07.975 05:16:59 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 01:22:07.975 05:16:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 01:22:07.975 ************************************ 01:22:07.975 START TEST raid_superblock_test 01:22:07.975 ************************************ 01:22:07.975 05:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 01:22:07.975 05:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 01:22:07.975 05:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 01:22:07.975 05:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 01:22:07.975 05:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 01:22:07.975 05:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 01:22:07.975 05:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 01:22:07.975 05:16:59 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 01:22:07.975 05:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 01:22:07.975 05:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 01:22:07.975 05:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 01:22:07.975 05:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 01:22:07.975 05:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 01:22:07.975 05:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 01:22:07.975 05:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 01:22:07.975 05:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 01:22:07.975 05:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=63035 01:22:07.975 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:22:07.975 05:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 63035 01:22:07.975 05:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 01:22:07.975 05:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 63035 ']' 01:22:07.975 05:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:22:07.975 05:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 01:22:07.975 05:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
01:22:07.975 05:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 01:22:07.975 05:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:22:08.233 [2024-12-09 05:16:59.635068] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:22:08.233 [2024-12-09 05:16:59.635534] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63035 ] 01:22:08.233 [2024-12-09 05:16:59.819751] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:22:08.490 [2024-12-09 05:16:59.937574] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:22:08.748 [2024-12-09 05:17:00.142518] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 01:22:08.748 [2024-12-09 05:17:00.142598] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 01:22:09.313 05:17:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:22:09.313 05:17:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 01:22:09.313 05:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 01:22:09.313 05:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 01:22:09.313 05:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 01:22:09.313 05:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 01:22:09.313 05:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 01:22:09.313 05:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 01:22:09.313 05:17:00 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 01:22:09.313 05:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 01:22:09.313 05:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 01:22:09.313 05:17:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:09.313 05:17:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:22:09.313 malloc1 01:22:09.313 05:17:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:09.313 05:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 01:22:09.313 05:17:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:09.313 05:17:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:22:09.313 [2024-12-09 05:17:00.791814] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 01:22:09.313 [2024-12-09 05:17:00.791897] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:22:09.313 [2024-12-09 05:17:00.791931] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 01:22:09.313 [2024-12-09 05:17:00.791947] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:22:09.313 [2024-12-09 05:17:00.794628] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:22:09.313 [2024-12-09 05:17:00.794669] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 01:22:09.313 pt1 01:22:09.313 05:17:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:09.313 05:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 01:22:09.313 05:17:00 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 01:22:09.313 05:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 01:22:09.313 05:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 01:22:09.313 05:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 01:22:09.313 05:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 01:22:09.313 05:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 01:22:09.313 05:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 01:22:09.313 05:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 01:22:09.314 05:17:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:09.314 05:17:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:22:09.314 malloc2 01:22:09.314 05:17:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:09.314 05:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 01:22:09.314 05:17:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:09.314 05:17:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:22:09.314 [2024-12-09 05:17:00.842703] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 01:22:09.314 [2024-12-09 05:17:00.842763] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:22:09.314 [2024-12-09 05:17:00.842803] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 01:22:09.314 
[2024-12-09 05:17:00.842817] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:22:09.314 [2024-12-09 05:17:00.845316] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:22:09.314 [2024-12-09 05:17:00.845372] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 01:22:09.314 pt2 01:22:09.314 05:17:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:09.314 05:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 01:22:09.314 05:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 01:22:09.314 05:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 01:22:09.314 05:17:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:09.314 05:17:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:22:09.314 [2024-12-09 05:17:00.850768] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 01:22:09.314 [2024-12-09 05:17:00.852985] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 01:22:09.314 [2024-12-09 05:17:00.853528] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 01:22:09.314 [2024-12-09 05:17:00.853558] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 01:22:09.314 [2024-12-09 05:17:00.853842] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 01:22:09.314 [2024-12-09 05:17:00.854048] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 01:22:09.314 [2024-12-09 05:17:00.854071] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 01:22:09.314 [2024-12-09 05:17:00.854224] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:22:09.314 05:17:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:09.314 05:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 01:22:09.314 05:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:22:09.314 05:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:22:09.314 05:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:22:09.314 05:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:22:09.314 05:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 01:22:09.314 05:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:22:09.314 05:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:22:09.314 05:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:22:09.314 05:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:22:09.314 05:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:22:09.314 05:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:22:09.314 05:17:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:09.314 05:17:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:22:09.314 05:17:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:09.314 05:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:22:09.314 "name": "raid_bdev1", 01:22:09.314 "uuid": 
"4efb0b97-e31e-4995-9ee2-42c59dab8a0a", 01:22:09.314 "strip_size_kb": 0, 01:22:09.314 "state": "online", 01:22:09.314 "raid_level": "raid1", 01:22:09.314 "superblock": true, 01:22:09.314 "num_base_bdevs": 2, 01:22:09.314 "num_base_bdevs_discovered": 2, 01:22:09.314 "num_base_bdevs_operational": 2, 01:22:09.314 "base_bdevs_list": [ 01:22:09.314 { 01:22:09.314 "name": "pt1", 01:22:09.314 "uuid": "00000000-0000-0000-0000-000000000001", 01:22:09.314 "is_configured": true, 01:22:09.314 "data_offset": 2048, 01:22:09.314 "data_size": 63488 01:22:09.314 }, 01:22:09.314 { 01:22:09.314 "name": "pt2", 01:22:09.314 "uuid": "00000000-0000-0000-0000-000000000002", 01:22:09.314 "is_configured": true, 01:22:09.314 "data_offset": 2048, 01:22:09.314 "data_size": 63488 01:22:09.314 } 01:22:09.314 ] 01:22:09.314 }' 01:22:09.314 05:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:22:09.314 05:17:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:22:09.880 05:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 01:22:09.880 05:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 01:22:09.880 05:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 01:22:09.880 05:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 01:22:09.880 05:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 01:22:09.880 05:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 01:22:09.880 05:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 01:22:09.880 05:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 01:22:09.880 05:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:09.880 05:17:01 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:22:09.880 [2024-12-09 05:17:01.387093] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 01:22:09.880 05:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:09.880 05:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 01:22:09.880 "name": "raid_bdev1", 01:22:09.880 "aliases": [ 01:22:09.880 "4efb0b97-e31e-4995-9ee2-42c59dab8a0a" 01:22:09.880 ], 01:22:09.880 "product_name": "Raid Volume", 01:22:09.880 "block_size": 512, 01:22:09.880 "num_blocks": 63488, 01:22:09.880 "uuid": "4efb0b97-e31e-4995-9ee2-42c59dab8a0a", 01:22:09.880 "assigned_rate_limits": { 01:22:09.880 "rw_ios_per_sec": 0, 01:22:09.880 "rw_mbytes_per_sec": 0, 01:22:09.880 "r_mbytes_per_sec": 0, 01:22:09.880 "w_mbytes_per_sec": 0 01:22:09.880 }, 01:22:09.880 "claimed": false, 01:22:09.880 "zoned": false, 01:22:09.880 "supported_io_types": { 01:22:09.880 "read": true, 01:22:09.880 "write": true, 01:22:09.880 "unmap": false, 01:22:09.880 "flush": false, 01:22:09.880 "reset": true, 01:22:09.881 "nvme_admin": false, 01:22:09.881 "nvme_io": false, 01:22:09.881 "nvme_io_md": false, 01:22:09.881 "write_zeroes": true, 01:22:09.881 "zcopy": false, 01:22:09.881 "get_zone_info": false, 01:22:09.881 "zone_management": false, 01:22:09.881 "zone_append": false, 01:22:09.881 "compare": false, 01:22:09.881 "compare_and_write": false, 01:22:09.881 "abort": false, 01:22:09.881 "seek_hole": false, 01:22:09.881 "seek_data": false, 01:22:09.881 "copy": false, 01:22:09.881 "nvme_iov_md": false 01:22:09.881 }, 01:22:09.881 "memory_domains": [ 01:22:09.881 { 01:22:09.881 "dma_device_id": "system", 01:22:09.881 "dma_device_type": 1 01:22:09.881 }, 01:22:09.881 { 01:22:09.881 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:22:09.881 "dma_device_type": 2 01:22:09.881 }, 01:22:09.881 { 01:22:09.881 "dma_device_id": "system", 01:22:09.881 "dma_device_type": 
1 01:22:09.881 }, 01:22:09.881 { 01:22:09.881 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:22:09.881 "dma_device_type": 2 01:22:09.881 } 01:22:09.881 ], 01:22:09.881 "driver_specific": { 01:22:09.881 "raid": { 01:22:09.881 "uuid": "4efb0b97-e31e-4995-9ee2-42c59dab8a0a", 01:22:09.881 "strip_size_kb": 0, 01:22:09.881 "state": "online", 01:22:09.881 "raid_level": "raid1", 01:22:09.881 "superblock": true, 01:22:09.881 "num_base_bdevs": 2, 01:22:09.881 "num_base_bdevs_discovered": 2, 01:22:09.881 "num_base_bdevs_operational": 2, 01:22:09.881 "base_bdevs_list": [ 01:22:09.881 { 01:22:09.881 "name": "pt1", 01:22:09.881 "uuid": "00000000-0000-0000-0000-000000000001", 01:22:09.881 "is_configured": true, 01:22:09.881 "data_offset": 2048, 01:22:09.881 "data_size": 63488 01:22:09.881 }, 01:22:09.881 { 01:22:09.881 "name": "pt2", 01:22:09.881 "uuid": "00000000-0000-0000-0000-000000000002", 01:22:09.881 "is_configured": true, 01:22:09.881 "data_offset": 2048, 01:22:09.881 "data_size": 63488 01:22:09.881 } 01:22:09.881 ] 01:22:09.881 } 01:22:09.881 } 01:22:09.881 }' 01:22:09.881 05:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 01:22:09.881 05:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 01:22:09.881 pt2' 01:22:09.881 05:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:22:10.139 05:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 01:22:10.139 05:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:22:10.139 05:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 01:22:10.139 05:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" 
")' 01:22:10.139 05:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:10.139 05:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:22:10.139 05:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:10.139 05:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:22:10.139 05:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:22:10.139 05:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:22:10.139 05:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 01:22:10.139 05:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:10.139 05:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:22:10.139 05:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:22:10.139 05:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:10.139 05:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:22:10.139 05:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:22:10.139 05:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 01:22:10.139 05:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 01:22:10.139 05:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:10.139 05:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:22:10.139 [2024-12-09 05:17:01.651164] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 01:22:10.139 05:17:01 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:10.139 05:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=4efb0b97-e31e-4995-9ee2-42c59dab8a0a 01:22:10.139 05:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 4efb0b97-e31e-4995-9ee2-42c59dab8a0a ']' 01:22:10.139 05:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 01:22:10.139 05:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:10.139 05:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:22:10.139 [2024-12-09 05:17:01.698872] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 01:22:10.139 [2024-12-09 05:17:01.698897] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 01:22:10.139 [2024-12-09 05:17:01.698971] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 01:22:10.139 [2024-12-09 05:17:01.699048] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 01:22:10.139 [2024-12-09 05:17:01.699067] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 01:22:10.139 05:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:10.139 05:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 01:22:10.139 05:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 01:22:10.139 05:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:10.140 05:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:22:10.140 05:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:10.399 05:17:01 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 01:22:10.399 05:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 01:22:10.399 05:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 01:22:10.399 05:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 01:22:10.399 05:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:10.399 05:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:22:10.399 05:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:10.399 05:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 01:22:10.399 05:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 01:22:10.399 05:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:10.399 05:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:22:10.399 05:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:10.399 05:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 01:22:10.399 05:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:10.399 05:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:22:10.399 05:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 01:22:10.399 05:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:10.399 05:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 01:22:10.399 05:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd 
bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 01:22:10.399 05:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 01:22:10.399 05:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 01:22:10.399 05:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 01:22:10.399 05:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:22:10.399 05:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 01:22:10.399 05:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:22:10.399 05:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 01:22:10.399 05:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:10.399 05:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:22:10.399 [2024-12-09 05:17:01.838930] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 01:22:10.399 [2024-12-09 05:17:01.841128] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 01:22:10.399 [2024-12-09 05:17:01.841201] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 01:22:10.399 [2024-12-09 05:17:01.841264] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 01:22:10.399 [2024-12-09 05:17:01.841288] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 01:22:10.399 [2024-12-09 05:17:01.841301] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 
name raid_bdev1, state configuring 01:22:10.399 request: 01:22:10.399 { 01:22:10.399 "name": "raid_bdev1", 01:22:10.399 "raid_level": "raid1", 01:22:10.399 "base_bdevs": [ 01:22:10.399 "malloc1", 01:22:10.399 "malloc2" 01:22:10.399 ], 01:22:10.399 "superblock": false, 01:22:10.399 "method": "bdev_raid_create", 01:22:10.399 "req_id": 1 01:22:10.399 } 01:22:10.399 Got JSON-RPC error response 01:22:10.399 response: 01:22:10.399 { 01:22:10.399 "code": -17, 01:22:10.399 "message": "Failed to create RAID bdev raid_bdev1: File exists" 01:22:10.399 } 01:22:10.399 05:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 01:22:10.399 05:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 01:22:10.399 05:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:22:10.399 05:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:22:10.399 05:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:22:10.399 05:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 01:22:10.399 05:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 01:22:10.399 05:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:10.399 05:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:22:10.399 05:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:10.399 05:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 01:22:10.399 05:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 01:22:10.399 05:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 01:22:10.399 05:17:01 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 01:22:10.399 05:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:22:10.399 [2024-12-09 05:17:01.906937] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 01:22:10.399 [2024-12-09 05:17:01.906990] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:22:10.399 [2024-12-09 05:17:01.907016] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 01:22:10.399 [2024-12-09 05:17:01.907032] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:22:10.399 [2024-12-09 05:17:01.909611] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:22:10.399 [2024-12-09 05:17:01.909657] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 01:22:10.399 [2024-12-09 05:17:01.909733] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 01:22:10.399 [2024-12-09 05:17:01.909796] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 01:22:10.399 pt1 01:22:10.399 05:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:10.399 05:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 01:22:10.399 05:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:22:10.399 05:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:22:10.399 05:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:22:10.399 05:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:22:10.399 05:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 01:22:10.399 05:17:01 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:22:10.399 05:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:22:10.399 05:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:22:10.399 05:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:22:10.400 05:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:22:10.400 05:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:22:10.400 05:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:10.400 05:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:22:10.400 05:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:10.400 05:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:22:10.400 "name": "raid_bdev1", 01:22:10.400 "uuid": "4efb0b97-e31e-4995-9ee2-42c59dab8a0a", 01:22:10.400 "strip_size_kb": 0, 01:22:10.400 "state": "configuring", 01:22:10.400 "raid_level": "raid1", 01:22:10.400 "superblock": true, 01:22:10.400 "num_base_bdevs": 2, 01:22:10.400 "num_base_bdevs_discovered": 1, 01:22:10.400 "num_base_bdevs_operational": 2, 01:22:10.400 "base_bdevs_list": [ 01:22:10.400 { 01:22:10.400 "name": "pt1", 01:22:10.400 "uuid": "00000000-0000-0000-0000-000000000001", 01:22:10.400 "is_configured": true, 01:22:10.400 "data_offset": 2048, 01:22:10.400 "data_size": 63488 01:22:10.400 }, 01:22:10.400 { 01:22:10.400 "name": null, 01:22:10.400 "uuid": "00000000-0000-0000-0000-000000000002", 01:22:10.400 "is_configured": false, 01:22:10.400 "data_offset": 2048, 01:22:10.400 "data_size": 63488 01:22:10.400 } 01:22:10.400 ] 01:22:10.400 }' 01:22:10.400 05:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:22:10.400 05:17:01 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:22:10.967 05:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 01:22:10.967 05:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 01:22:10.967 05:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 01:22:10.967 05:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 01:22:10.967 05:17:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:10.967 05:17:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:22:10.967 [2024-12-09 05:17:02.443108] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 01:22:10.967 [2024-12-09 05:17:02.443181] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:22:10.967 [2024-12-09 05:17:02.443212] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 01:22:10.967 [2024-12-09 05:17:02.443230] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:22:10.967 [2024-12-09 05:17:02.443757] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:22:10.967 [2024-12-09 05:17:02.443794] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 01:22:10.967 [2024-12-09 05:17:02.443871] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 01:22:10.967 [2024-12-09 05:17:02.443915] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 01:22:10.967 [2024-12-09 05:17:02.444090] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 01:22:10.967 [2024-12-09 05:17:02.444119] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 01:22:10.967 [2024-12-09 
05:17:02.444471] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 01:22:10.967 [2024-12-09 05:17:02.444647] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 01:22:10.967 [2024-12-09 05:17:02.444661] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 01:22:10.967 [2024-12-09 05:17:02.444809] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:22:10.967 pt2 01:22:10.967 05:17:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:10.967 05:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 01:22:10.967 05:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 01:22:10.967 05:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 01:22:10.967 05:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:22:10.967 05:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:22:10.967 05:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:22:10.967 05:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:22:10.967 05:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 01:22:10.967 05:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:22:10.967 05:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:22:10.967 05:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:22:10.967 05:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:22:10.967 05:17:02 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:22:10.967 05:17:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:10.967 05:17:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:22:10.967 05:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:22:10.967 05:17:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:10.967 05:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:22:10.967 "name": "raid_bdev1", 01:22:10.967 "uuid": "4efb0b97-e31e-4995-9ee2-42c59dab8a0a", 01:22:10.967 "strip_size_kb": 0, 01:22:10.967 "state": "online", 01:22:10.967 "raid_level": "raid1", 01:22:10.967 "superblock": true, 01:22:10.967 "num_base_bdevs": 2, 01:22:10.967 "num_base_bdevs_discovered": 2, 01:22:10.967 "num_base_bdevs_operational": 2, 01:22:10.967 "base_bdevs_list": [ 01:22:10.967 { 01:22:10.967 "name": "pt1", 01:22:10.967 "uuid": "00000000-0000-0000-0000-000000000001", 01:22:10.967 "is_configured": true, 01:22:10.967 "data_offset": 2048, 01:22:10.967 "data_size": 63488 01:22:10.967 }, 01:22:10.967 { 01:22:10.967 "name": "pt2", 01:22:10.967 "uuid": "00000000-0000-0000-0000-000000000002", 01:22:10.967 "is_configured": true, 01:22:10.967 "data_offset": 2048, 01:22:10.967 "data_size": 63488 01:22:10.967 } 01:22:10.967 ] 01:22:10.967 }' 01:22:10.967 05:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:22:10.967 05:17:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:22:11.546 05:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 01:22:11.546 05:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 01:22:11.546 05:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
01:22:11.546 05:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 01:22:11.546 05:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 01:22:11.546 05:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 01:22:11.546 05:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 01:22:11.546 05:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 01:22:11.546 05:17:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:11.546 05:17:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:22:11.546 [2024-12-09 05:17:02.987451] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 01:22:11.546 05:17:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:11.546 05:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 01:22:11.546 "name": "raid_bdev1", 01:22:11.546 "aliases": [ 01:22:11.546 "4efb0b97-e31e-4995-9ee2-42c59dab8a0a" 01:22:11.546 ], 01:22:11.546 "product_name": "Raid Volume", 01:22:11.546 "block_size": 512, 01:22:11.546 "num_blocks": 63488, 01:22:11.546 "uuid": "4efb0b97-e31e-4995-9ee2-42c59dab8a0a", 01:22:11.546 "assigned_rate_limits": { 01:22:11.546 "rw_ios_per_sec": 0, 01:22:11.546 "rw_mbytes_per_sec": 0, 01:22:11.546 "r_mbytes_per_sec": 0, 01:22:11.546 "w_mbytes_per_sec": 0 01:22:11.546 }, 01:22:11.546 "claimed": false, 01:22:11.546 "zoned": false, 01:22:11.546 "supported_io_types": { 01:22:11.546 "read": true, 01:22:11.546 "write": true, 01:22:11.546 "unmap": false, 01:22:11.546 "flush": false, 01:22:11.546 "reset": true, 01:22:11.546 "nvme_admin": false, 01:22:11.546 "nvme_io": false, 01:22:11.546 "nvme_io_md": false, 01:22:11.546 "write_zeroes": true, 01:22:11.546 "zcopy": false, 01:22:11.546 "get_zone_info": false, 
01:22:11.546 "zone_management": false, 01:22:11.546 "zone_append": false, 01:22:11.546 "compare": false, 01:22:11.546 "compare_and_write": false, 01:22:11.546 "abort": false, 01:22:11.546 "seek_hole": false, 01:22:11.546 "seek_data": false, 01:22:11.546 "copy": false, 01:22:11.546 "nvme_iov_md": false 01:22:11.546 }, 01:22:11.546 "memory_domains": [ 01:22:11.546 { 01:22:11.546 "dma_device_id": "system", 01:22:11.546 "dma_device_type": 1 01:22:11.546 }, 01:22:11.546 { 01:22:11.546 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:22:11.546 "dma_device_type": 2 01:22:11.546 }, 01:22:11.546 { 01:22:11.546 "dma_device_id": "system", 01:22:11.546 "dma_device_type": 1 01:22:11.546 }, 01:22:11.546 { 01:22:11.546 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:22:11.546 "dma_device_type": 2 01:22:11.546 } 01:22:11.546 ], 01:22:11.546 "driver_specific": { 01:22:11.546 "raid": { 01:22:11.546 "uuid": "4efb0b97-e31e-4995-9ee2-42c59dab8a0a", 01:22:11.546 "strip_size_kb": 0, 01:22:11.546 "state": "online", 01:22:11.546 "raid_level": "raid1", 01:22:11.546 "superblock": true, 01:22:11.546 "num_base_bdevs": 2, 01:22:11.546 "num_base_bdevs_discovered": 2, 01:22:11.546 "num_base_bdevs_operational": 2, 01:22:11.546 "base_bdevs_list": [ 01:22:11.546 { 01:22:11.546 "name": "pt1", 01:22:11.546 "uuid": "00000000-0000-0000-0000-000000000001", 01:22:11.546 "is_configured": true, 01:22:11.546 "data_offset": 2048, 01:22:11.546 "data_size": 63488 01:22:11.546 }, 01:22:11.546 { 01:22:11.546 "name": "pt2", 01:22:11.546 "uuid": "00000000-0000-0000-0000-000000000002", 01:22:11.546 "is_configured": true, 01:22:11.546 "data_offset": 2048, 01:22:11.546 "data_size": 63488 01:22:11.546 } 01:22:11.546 ] 01:22:11.546 } 01:22:11.546 } 01:22:11.546 }' 01:22:11.546 05:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 01:22:11.546 05:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # 
base_bdev_names='pt1 01:22:11.546 pt2' 01:22:11.546 05:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:22:11.546 05:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 01:22:11.546 05:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:22:11.546 05:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 01:22:11.546 05:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:22:11.546 05:17:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:11.546 05:17:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:22:11.805 05:17:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:11.805 05:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:22:11.805 05:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:22:11.805 05:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:22:11.805 05:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 01:22:11.805 05:17:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:11.805 05:17:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:22:11.805 05:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:22:11.805 05:17:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:11.805 05:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
01:22:11.805 05:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:22:11.805 05:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 01:22:11.805 05:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 01:22:11.805 05:17:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:11.805 05:17:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:22:11.805 [2024-12-09 05:17:03.259479] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 01:22:11.805 05:17:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:11.805 05:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 4efb0b97-e31e-4995-9ee2-42c59dab8a0a '!=' 4efb0b97-e31e-4995-9ee2-42c59dab8a0a ']' 01:22:11.805 05:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 01:22:11.805 05:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 01:22:11.805 05:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 01:22:11.805 05:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 01:22:11.805 05:17:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:11.805 05:17:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:22:11.805 [2024-12-09 05:17:03.311312] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 01:22:11.805 05:17:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:11.805 05:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 01:22:11.805 05:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 
01:22:11.805 05:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:22:11.805 05:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:22:11.805 05:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:22:11.805 05:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 01:22:11.805 05:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:22:11.805 05:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:22:11.805 05:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:22:11.805 05:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:22:11.805 05:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:22:11.805 05:17:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:11.805 05:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:22:11.805 05:17:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:22:11.805 05:17:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:11.805 05:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:22:11.805 "name": "raid_bdev1", 01:22:11.805 "uuid": "4efb0b97-e31e-4995-9ee2-42c59dab8a0a", 01:22:11.805 "strip_size_kb": 0, 01:22:11.805 "state": "online", 01:22:11.805 "raid_level": "raid1", 01:22:11.805 "superblock": true, 01:22:11.805 "num_base_bdevs": 2, 01:22:11.805 "num_base_bdevs_discovered": 1, 01:22:11.805 "num_base_bdevs_operational": 1, 01:22:11.805 "base_bdevs_list": [ 01:22:11.806 { 01:22:11.806 "name": null, 01:22:11.806 "uuid": "00000000-0000-0000-0000-000000000000", 
01:22:11.806 "is_configured": false, 01:22:11.806 "data_offset": 0, 01:22:11.806 "data_size": 63488 01:22:11.806 }, 01:22:11.806 { 01:22:11.806 "name": "pt2", 01:22:11.806 "uuid": "00000000-0000-0000-0000-000000000002", 01:22:11.806 "is_configured": true, 01:22:11.806 "data_offset": 2048, 01:22:11.806 "data_size": 63488 01:22:11.806 } 01:22:11.806 ] 01:22:11.806 }' 01:22:11.806 05:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:22:11.806 05:17:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:22:12.374 05:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 01:22:12.374 05:17:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:12.374 05:17:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:22:12.374 [2024-12-09 05:17:03.843384] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 01:22:12.374 [2024-12-09 05:17:03.843409] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 01:22:12.374 [2024-12-09 05:17:03.843468] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 01:22:12.374 [2024-12-09 05:17:03.843516] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 01:22:12.374 [2024-12-09 05:17:03.843536] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 01:22:12.374 05:17:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:12.374 05:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 01:22:12.374 05:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 01:22:12.374 05:17:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:12.374 
05:17:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:22:12.374 05:17:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:12.374 05:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 01:22:12.374 05:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 01:22:12.374 05:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 01:22:12.374 05:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 01:22:12.374 05:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 01:22:12.374 05:17:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:12.374 05:17:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:22:12.374 05:17:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:12.374 05:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 01:22:12.374 05:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 01:22:12.374 05:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 01:22:12.374 05:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 01:22:12.374 05:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=1 01:22:12.374 05:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 01:22:12.374 05:17:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:12.374 05:17:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:22:12.374 [2024-12-09 05:17:03.915402] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 
01:22:12.374 [2024-12-09 05:17:03.915465] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:22:12.374 [2024-12-09 05:17:03.915489] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 01:22:12.374 [2024-12-09 05:17:03.915506] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:22:12.374 [2024-12-09 05:17:03.918276] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:22:12.374 [2024-12-09 05:17:03.918322] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 01:22:12.374 [2024-12-09 05:17:03.918420] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 01:22:12.374 [2024-12-09 05:17:03.918480] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 01:22:12.374 [2024-12-09 05:17:03.918591] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 01:22:12.374 [2024-12-09 05:17:03.918612] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 01:22:12.374 [2024-12-09 05:17:03.918923] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 01:22:12.374 [2024-12-09 05:17:03.919135] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 01:22:12.374 [2024-12-09 05:17:03.919182] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 01:22:12.374 [2024-12-09 05:17:03.919391] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:22:12.374 pt2 01:22:12.374 05:17:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:12.374 05:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 01:22:12.374 05:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 
01:22:12.374 05:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:22:12.374 05:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:22:12.374 05:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:22:12.374 05:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 01:22:12.374 05:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:22:12.374 05:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:22:12.374 05:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:22:12.374 05:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:22:12.374 05:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:22:12.374 05:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:22:12.374 05:17:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:12.374 05:17:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:22:12.374 05:17:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:12.374 05:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:22:12.374 "name": "raid_bdev1", 01:22:12.374 "uuid": "4efb0b97-e31e-4995-9ee2-42c59dab8a0a", 01:22:12.374 "strip_size_kb": 0, 01:22:12.374 "state": "online", 01:22:12.374 "raid_level": "raid1", 01:22:12.374 "superblock": true, 01:22:12.374 "num_base_bdevs": 2, 01:22:12.374 "num_base_bdevs_discovered": 1, 01:22:12.374 "num_base_bdevs_operational": 1, 01:22:12.374 "base_bdevs_list": [ 01:22:12.374 { 01:22:12.374 "name": null, 01:22:12.374 "uuid": "00000000-0000-0000-0000-000000000000", 
01:22:12.374 "is_configured": false, 01:22:12.374 "data_offset": 2048, 01:22:12.374 "data_size": 63488 01:22:12.374 }, 01:22:12.374 { 01:22:12.374 "name": "pt2", 01:22:12.374 "uuid": "00000000-0000-0000-0000-000000000002", 01:22:12.374 "is_configured": true, 01:22:12.374 "data_offset": 2048, 01:22:12.374 "data_size": 63488 01:22:12.374 } 01:22:12.374 ] 01:22:12.374 }' 01:22:12.374 05:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:22:12.374 05:17:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:22:12.942 05:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 01:22:12.942 05:17:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:12.942 05:17:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:22:12.942 [2024-12-09 05:17:04.463545] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 01:22:12.942 [2024-12-09 05:17:04.463576] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 01:22:12.942 [2024-12-09 05:17:04.463637] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 01:22:12.942 [2024-12-09 05:17:04.463693] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 01:22:12.942 [2024-12-09 05:17:04.463707] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 01:22:12.942 05:17:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:12.942 05:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 01:22:12.942 05:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 01:22:12.942 05:17:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:12.942 
05:17:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:22:12.942 05:17:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:12.942 05:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 01:22:12.942 05:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 01:22:12.942 05:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 01:22:12.942 05:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 01:22:12.942 05:17:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:12.942 05:17:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:22:12.942 [2024-12-09 05:17:04.547586] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 01:22:12.942 [2024-12-09 05:17:04.547641] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:22:12.942 [2024-12-09 05:17:04.547668] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 01:22:12.942 [2024-12-09 05:17:04.547682] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:22:12.942 [2024-12-09 05:17:04.550329] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:22:12.942 [2024-12-09 05:17:04.550541] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 01:22:12.942 [2024-12-09 05:17:04.550648] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 01:22:12.942 [2024-12-09 05:17:04.550700] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 01:22:12.942 [2024-12-09 05:17:04.550874] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 
01:22:12.942 [2024-12-09 05:17:04.550891] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 01:22:12.942 [2024-12-09 05:17:04.550911] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 01:22:12.942 [2024-12-09 05:17:04.550985] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 01:22:12.942 [2024-12-09 05:17:04.551091] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 01:22:12.942 [2024-12-09 05:17:04.551106] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 01:22:12.942 [2024-12-09 05:17:04.551373] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 01:22:12.942 [2024-12-09 05:17:04.551554] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 01:22:12.942 [2024-12-09 05:17:04.551575] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 01:22:12.942 [2024-12-09 05:17:04.551772] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:22:12.942 pt1 01:22:12.942 05:17:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:12.942 05:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 01:22:12.942 05:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 01:22:12.942 05:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:22:12.942 05:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:22:12.942 05:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:22:12.942 05:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:22:12.942 05:17:04 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 01:22:12.942 05:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:22:12.942 05:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:22:12.942 05:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:22:12.942 05:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:22:13.201 05:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:22:13.201 05:17:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:13.201 05:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:22:13.201 05:17:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:22:13.201 05:17:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:13.201 05:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:22:13.201 "name": "raid_bdev1", 01:22:13.201 "uuid": "4efb0b97-e31e-4995-9ee2-42c59dab8a0a", 01:22:13.201 "strip_size_kb": 0, 01:22:13.201 "state": "online", 01:22:13.201 "raid_level": "raid1", 01:22:13.201 "superblock": true, 01:22:13.201 "num_base_bdevs": 2, 01:22:13.201 "num_base_bdevs_discovered": 1, 01:22:13.201 "num_base_bdevs_operational": 1, 01:22:13.201 "base_bdevs_list": [ 01:22:13.201 { 01:22:13.201 "name": null, 01:22:13.201 "uuid": "00000000-0000-0000-0000-000000000000", 01:22:13.201 "is_configured": false, 01:22:13.201 "data_offset": 2048, 01:22:13.201 "data_size": 63488 01:22:13.201 }, 01:22:13.201 { 01:22:13.201 "name": "pt2", 01:22:13.201 "uuid": "00000000-0000-0000-0000-000000000002", 01:22:13.201 "is_configured": true, 01:22:13.201 "data_offset": 2048, 01:22:13.201 "data_size": 63488 01:22:13.201 } 
01:22:13.201 ] 01:22:13.201 }' 01:22:13.201 05:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:22:13.201 05:17:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:22:13.768 05:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 01:22:13.768 05:17:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:13.768 05:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 01:22:13.768 05:17:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:22:13.768 05:17:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:13.768 05:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 01:22:13.768 05:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 01:22:13.768 05:17:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:13.768 05:17:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:22:13.768 05:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 01:22:13.768 [2024-12-09 05:17:05.156073] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 01:22:13.768 05:17:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:13.768 05:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 4efb0b97-e31e-4995-9ee2-42c59dab8a0a '!=' 4efb0b97-e31e-4995-9ee2-42c59dab8a0a ']' 01:22:13.768 05:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 63035 01:22:13.768 05:17:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 63035 ']' 01:22:13.768 05:17:05 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@958 -- # kill -0 63035 01:22:13.768 05:17:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 01:22:13.768 05:17:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:22:13.768 05:17:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63035 01:22:13.768 killing process with pid 63035 01:22:13.768 05:17:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:22:13.768 05:17:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:22:13.769 05:17:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63035' 01:22:13.769 05:17:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 63035 01:22:13.769 [2024-12-09 05:17:05.245336] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 01:22:13.769 [2024-12-09 05:17:05.245433] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 01:22:13.769 [2024-12-09 05:17:05.245489] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 01:22:13.769 [2024-12-09 05:17:05.245523] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 01:22:13.769 05:17:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 63035 01:22:14.027 [2024-12-09 05:17:05.422830] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 01:22:15.403 05:17:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 01:22:15.403 ************************************ 01:22:15.403 END TEST raid_superblock_test 01:22:15.403 ************************************ 01:22:15.403 01:22:15.403 real 0m7.108s 01:22:15.403 user 0m11.248s 01:22:15.403 sys 0m0.979s 01:22:15.403 05:17:06 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@1130 -- # xtrace_disable 01:22:15.403 05:17:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:22:15.403 05:17:06 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 01:22:15.403 05:17:06 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 01:22:15.403 05:17:06 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 01:22:15.403 05:17:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 01:22:15.403 ************************************ 01:22:15.403 START TEST raid_read_error_test 01:22:15.403 ************************************ 01:22:15.403 05:17:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 read 01:22:15.403 05:17:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 01:22:15.403 05:17:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 01:22:15.403 05:17:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 01:22:15.403 05:17:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 01:22:15.403 05:17:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 01:22:15.403 05:17:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 01:22:15.403 05:17:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 01:22:15.403 05:17:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 01:22:15.403 05:17:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 01:22:15.403 05:17:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 01:22:15.403 05:17:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 01:22:15.403 05:17:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2') 01:22:15.403 05:17:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 01:22:15.403 05:17:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 01:22:15.403 05:17:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 01:22:15.403 05:17:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 01:22:15.403 05:17:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 01:22:15.403 05:17:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 01:22:15.403 05:17:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 01:22:15.403 05:17:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 01:22:15.403 05:17:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 01:22:15.403 05:17:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.oeXYo797JQ 01:22:15.403 05:17:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63376 01:22:15.403 05:17:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 01:22:15.403 05:17:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63376 01:22:15.403 05:17:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 63376 ']' 01:22:15.403 05:17:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:22:15.403 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
01:22:15.403 05:17:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 01:22:15.403 05:17:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:22:15.403 05:17:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 01:22:15.403 05:17:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 01:22:15.403 [2024-12-09 05:17:06.823320] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:22:15.403 [2024-12-09 05:17:06.823535] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63376 ] 01:22:15.403 [2024-12-09 05:17:07.008758] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:22:15.663 [2024-12-09 05:17:07.139644] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:22:15.922 [2024-12-09 05:17:07.356099] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 01:22:15.922 [2024-12-09 05:17:07.356146] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 01:22:16.503 05:17:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:22:16.503 05:17:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 01:22:16.503 05:17:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 01:22:16.503 05:17:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 01:22:16.503 05:17:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:16.503 05:17:07 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 01:22:16.503 BaseBdev1_malloc 01:22:16.503 05:17:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:16.503 05:17:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 01:22:16.503 05:17:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:16.503 05:17:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 01:22:16.503 true 01:22:16.503 05:17:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:16.503 05:17:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 01:22:16.503 05:17:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:16.503 05:17:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 01:22:16.503 [2024-12-09 05:17:07.871789] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 01:22:16.503 [2024-12-09 05:17:07.871866] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:22:16.503 [2024-12-09 05:17:07.871895] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 01:22:16.503 [2024-12-09 05:17:07.871911] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:22:16.503 [2024-12-09 05:17:07.874853] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:22:16.503 [2024-12-09 05:17:07.874899] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 01:22:16.503 BaseBdev1 01:22:16.503 05:17:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:16.503 05:17:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 01:22:16.503 05:17:07 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 01:22:16.503 05:17:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:16.503 05:17:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 01:22:16.503 BaseBdev2_malloc 01:22:16.503 05:17:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:16.503 05:17:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 01:22:16.503 05:17:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:16.503 05:17:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 01:22:16.503 true 01:22:16.503 05:17:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:16.503 05:17:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 01:22:16.503 05:17:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:16.503 05:17:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 01:22:16.504 [2024-12-09 05:17:07.928837] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 01:22:16.504 [2024-12-09 05:17:07.928903] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:22:16.504 [2024-12-09 05:17:07.928928] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 01:22:16.504 [2024-12-09 05:17:07.928943] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:22:16.504 [2024-12-09 05:17:07.931863] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:22:16.504 [2024-12-09 05:17:07.931907] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 
01:22:16.504 BaseBdev2 01:22:16.504 05:17:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:16.504 05:17:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 01:22:16.504 05:17:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:16.504 05:17:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 01:22:16.504 [2024-12-09 05:17:07.936891] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 01:22:16.504 [2024-12-09 05:17:07.939449] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 01:22:16.504 [2024-12-09 05:17:07.939696] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 01:22:16.504 [2024-12-09 05:17:07.939718] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 01:22:16.504 [2024-12-09 05:17:07.939978] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 01:22:16.504 [2024-12-09 05:17:07.940196] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 01:22:16.504 [2024-12-09 05:17:07.940212] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 01:22:16.504 [2024-12-09 05:17:07.940418] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:22:16.504 05:17:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:16.504 05:17:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 01:22:16.504 05:17:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:22:16.504 05:17:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:22:16.504 
05:17:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:22:16.504 05:17:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:22:16.504 05:17:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 01:22:16.504 05:17:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:22:16.504 05:17:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:22:16.504 05:17:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:22:16.504 05:17:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:22:16.504 05:17:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:22:16.504 05:17:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:22:16.504 05:17:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:16.504 05:17:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 01:22:16.504 05:17:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:16.504 05:17:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:22:16.504 "name": "raid_bdev1", 01:22:16.504 "uuid": "69e41f11-1930-4dad-a24d-f08bcd1841f3", 01:22:16.504 "strip_size_kb": 0, 01:22:16.504 "state": "online", 01:22:16.504 "raid_level": "raid1", 01:22:16.504 "superblock": true, 01:22:16.504 "num_base_bdevs": 2, 01:22:16.504 "num_base_bdevs_discovered": 2, 01:22:16.504 "num_base_bdevs_operational": 2, 01:22:16.504 "base_bdevs_list": [ 01:22:16.504 { 01:22:16.504 "name": "BaseBdev1", 01:22:16.504 "uuid": "2cc2faa1-e542-588d-be21-f45bc9aed499", 01:22:16.504 "is_configured": true, 01:22:16.504 "data_offset": 2048, 01:22:16.504 "data_size": 63488 01:22:16.504 }, 
01:22:16.504 { 01:22:16.504 "name": "BaseBdev2", 01:22:16.504 "uuid": "583c5242-7176-57a4-9f4d-0a3c9d0bd6b5", 01:22:16.504 "is_configured": true, 01:22:16.504 "data_offset": 2048, 01:22:16.504 "data_size": 63488 01:22:16.504 } 01:22:16.504 ] 01:22:16.504 }' 01:22:16.504 05:17:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:22:16.504 05:17:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 01:22:17.071 05:17:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 01:22:17.071 05:17:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 01:22:17.071 [2024-12-09 05:17:08.546360] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 01:22:18.007 05:17:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 01:22:18.007 05:17:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:18.007 05:17:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 01:22:18.007 05:17:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:18.007 05:17:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 01:22:18.007 05:17:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 01:22:18.007 05:17:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 01:22:18.007 05:17:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 01:22:18.007 05:17:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 01:22:18.007 05:17:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:22:18.007 05:17:09 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:22:18.007 05:17:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:22:18.007 05:17:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:22:18.007 05:17:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 01:22:18.007 05:17:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:22:18.007 05:17:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:22:18.007 05:17:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:22:18.007 05:17:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:22:18.007 05:17:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:22:18.007 05:17:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:22:18.007 05:17:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:18.007 05:17:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 01:22:18.007 05:17:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:18.007 05:17:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:22:18.007 "name": "raid_bdev1", 01:22:18.007 "uuid": "69e41f11-1930-4dad-a24d-f08bcd1841f3", 01:22:18.007 "strip_size_kb": 0, 01:22:18.007 "state": "online", 01:22:18.007 "raid_level": "raid1", 01:22:18.007 "superblock": true, 01:22:18.007 "num_base_bdevs": 2, 01:22:18.007 "num_base_bdevs_discovered": 2, 01:22:18.007 "num_base_bdevs_operational": 2, 01:22:18.007 "base_bdevs_list": [ 01:22:18.007 { 01:22:18.007 "name": "BaseBdev1", 01:22:18.007 "uuid": "2cc2faa1-e542-588d-be21-f45bc9aed499", 01:22:18.007 
"is_configured": true, 01:22:18.007 "data_offset": 2048, 01:22:18.007 "data_size": 63488 01:22:18.007 }, 01:22:18.007 { 01:22:18.007 "name": "BaseBdev2", 01:22:18.007 "uuid": "583c5242-7176-57a4-9f4d-0a3c9d0bd6b5", 01:22:18.007 "is_configured": true, 01:22:18.007 "data_offset": 2048, 01:22:18.007 "data_size": 63488 01:22:18.007 } 01:22:18.007 ] 01:22:18.007 }' 01:22:18.007 05:17:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:22:18.007 05:17:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 01:22:18.574 05:17:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 01:22:18.574 05:17:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:18.574 05:17:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 01:22:18.574 [2024-12-09 05:17:09.988140] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 01:22:18.574 [2024-12-09 05:17:09.988197] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 01:22:18.574 [2024-12-09 05:17:09.991323] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 01:22:18.574 [2024-12-09 05:17:09.991603] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:22:18.574 [2024-12-09 05:17:09.991728] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 01:22:18.574 [2024-12-09 05:17:09.991750] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 01:22:18.574 { 01:22:18.574 "results": [ 01:22:18.574 { 01:22:18.574 "job": "raid_bdev1", 01:22:18.574 "core_mask": "0x1", 01:22:18.574 "workload": "randrw", 01:22:18.574 "percentage": 50, 01:22:18.574 "status": "finished", 01:22:18.574 "queue_depth": 1, 01:22:18.574 "io_size": 131072, 01:22:18.574 "runtime": 1.439403, 01:22:18.574 
"iops": 13175.601273583563, 01:22:18.574 "mibps": 1646.9501591979454, 01:22:18.574 "io_failed": 0, 01:22:18.574 "io_timeout": 0, 01:22:18.574 "avg_latency_us": 72.00149864583082, 01:22:18.574 "min_latency_us": 37.70181818181818, 01:22:18.574 "max_latency_us": 1660.7418181818182 01:22:18.574 } 01:22:18.574 ], 01:22:18.574 "core_count": 1 01:22:18.574 } 01:22:18.574 05:17:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:18.574 05:17:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63376 01:22:18.574 05:17:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 63376 ']' 01:22:18.574 05:17:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 63376 01:22:18.574 05:17:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 01:22:18.574 05:17:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:22:18.574 05:17:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63376 01:22:18.574 killing process with pid 63376 01:22:18.574 05:17:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:22:18.574 05:17:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:22:18.574 05:17:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63376' 01:22:18.574 05:17:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 63376 01:22:18.574 05:17:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 63376 01:22:18.574 [2024-12-09 05:17:10.027424] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 01:22:18.574 [2024-12-09 05:17:10.144456] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 01:22:19.948 05:17:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 
-- # grep -v Job /raidtest/tmp.oeXYo797JQ 01:22:19.948 05:17:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 01:22:19.948 05:17:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 01:22:19.948 05:17:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 01:22:19.948 05:17:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 01:22:19.948 05:17:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 01:22:19.948 05:17:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 01:22:19.948 ************************************ 01:22:19.948 END TEST raid_read_error_test 01:22:19.948 ************************************ 01:22:19.948 05:17:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 01:22:19.948 01:22:19.948 real 0m4.596s 01:22:19.948 user 0m5.684s 01:22:19.948 sys 0m0.622s 01:22:19.948 05:17:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 01:22:19.948 05:17:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 01:22:19.948 05:17:11 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 01:22:19.948 05:17:11 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 01:22:19.948 05:17:11 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 01:22:19.948 05:17:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 01:22:19.948 ************************************ 01:22:19.948 START TEST raid_write_error_test 01:22:19.948 ************************************ 01:22:19.948 05:17:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 write 01:22:19.948 05:17:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 01:22:19.948 05:17:11 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 01:22:19.948 05:17:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 01:22:19.948 05:17:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 01:22:19.948 05:17:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 01:22:19.948 05:17:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 01:22:19.948 05:17:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 01:22:19.948 05:17:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 01:22:19.948 05:17:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 01:22:19.948 05:17:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 01:22:19.948 05:17:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 01:22:19.948 05:17:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 01:22:19.948 05:17:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 01:22:19.948 05:17:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 01:22:19.948 05:17:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 01:22:19.948 05:17:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 01:22:19.948 05:17:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 01:22:19.948 05:17:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 01:22:19.948 05:17:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 01:22:19.948 05:17:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 01:22:19.948 05:17:11 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 01:22:19.948 05:17:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.1LRCY0i0IT 01:22:19.948 05:17:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63522 01:22:19.948 05:17:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 01:22:19.948 05:17:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63522 01:22:19.948 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:22:19.948 05:17:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 63522 ']' 01:22:19.948 05:17:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:22:19.948 05:17:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 01:22:19.948 05:17:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:22:19.948 05:17:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 01:22:19.948 05:17:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 01:22:19.948 [2024-12-09 05:17:11.444263] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
01:22:19.948 [2024-12-09 05:17:11.444424] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63522 ] 01:22:20.207 [2024-12-09 05:17:11.613719] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:22:20.207 [2024-12-09 05:17:11.738591] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:22:20.466 [2024-12-09 05:17:11.945393] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 01:22:20.466 [2024-12-09 05:17:11.945461] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 01:22:21.034 05:17:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:22:21.034 05:17:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 01:22:21.034 05:17:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 01:22:21.034 05:17:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 01:22:21.034 05:17:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:21.034 05:17:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 01:22:21.034 BaseBdev1_malloc 01:22:21.034 05:17:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:21.034 05:17:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 01:22:21.034 05:17:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:21.034 05:17:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 01:22:21.034 true 01:22:21.034 05:17:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 01:22:21.034 05:17:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 01:22:21.034 05:17:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:21.034 05:17:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 01:22:21.034 [2024-12-09 05:17:12.504667] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 01:22:21.034 [2024-12-09 05:17:12.504740] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:22:21.034 [2024-12-09 05:17:12.504769] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 01:22:21.034 [2024-12-09 05:17:12.504786] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:22:21.034 [2024-12-09 05:17:12.507545] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:22:21.034 [2024-12-09 05:17:12.507592] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 01:22:21.034 BaseBdev1 01:22:21.034 05:17:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:21.034 05:17:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 01:22:21.034 05:17:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 01:22:21.035 05:17:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:21.035 05:17:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 01:22:21.035 BaseBdev2_malloc 01:22:21.035 05:17:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:21.035 05:17:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 01:22:21.035 05:17:12 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:21.035 05:17:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 01:22:21.035 true 01:22:21.035 05:17:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:21.035 05:17:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 01:22:21.035 05:17:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:21.035 05:17:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 01:22:21.035 [2024-12-09 05:17:12.560957] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 01:22:21.035 [2024-12-09 05:17:12.561033] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:22:21.035 [2024-12-09 05:17:12.561058] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 01:22:21.035 [2024-12-09 05:17:12.561075] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:22:21.035 [2024-12-09 05:17:12.564008] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:22:21.035 [2024-12-09 05:17:12.564234] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 01:22:21.035 BaseBdev2 01:22:21.035 05:17:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:21.035 05:17:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 01:22:21.035 05:17:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:21.035 05:17:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 01:22:21.035 [2024-12-09 05:17:12.569024] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 01:22:21.035 [2024-12-09 05:17:12.571678] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 01:22:21.035 [2024-12-09 05:17:12.572101] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 01:22:21.035 [2024-12-09 05:17:12.572221] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 01:22:21.035 [2024-12-09 05:17:12.572588] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 01:22:21.035 [2024-12-09 05:17:12.572949] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 01:22:21.035 [2024-12-09 05:17:12.573082] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 01:22:21.035 [2024-12-09 05:17:12.573484] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:22:21.035 05:17:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:21.035 05:17:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 01:22:21.035 05:17:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:22:21.035 05:17:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:22:21.035 05:17:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:22:21.035 05:17:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:22:21.035 05:17:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 01:22:21.035 05:17:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:22:21.035 05:17:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:22:21.035 05:17:12 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:22:21.035 05:17:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:22:21.035 05:17:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:22:21.035 05:17:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:22:21.035 05:17:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:21.035 05:17:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 01:22:21.035 05:17:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:21.035 05:17:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:22:21.035 "name": "raid_bdev1", 01:22:21.035 "uuid": "5505e60a-8223-4095-afd5-2b0f8c4f8879", 01:22:21.035 "strip_size_kb": 0, 01:22:21.035 "state": "online", 01:22:21.035 "raid_level": "raid1", 01:22:21.035 "superblock": true, 01:22:21.035 "num_base_bdevs": 2, 01:22:21.035 "num_base_bdevs_discovered": 2, 01:22:21.035 "num_base_bdevs_operational": 2, 01:22:21.035 "base_bdevs_list": [ 01:22:21.035 { 01:22:21.035 "name": "BaseBdev1", 01:22:21.035 "uuid": "8d842d2b-de4d-572f-b8da-f0fcccdc41f8", 01:22:21.035 "is_configured": true, 01:22:21.035 "data_offset": 2048, 01:22:21.035 "data_size": 63488 01:22:21.035 }, 01:22:21.035 { 01:22:21.035 "name": "BaseBdev2", 01:22:21.035 "uuid": "4a99f7dd-4ece-53c2-ad49-0cadd6bffc8b", 01:22:21.035 "is_configured": true, 01:22:21.035 "data_offset": 2048, 01:22:21.035 "data_size": 63488 01:22:21.035 } 01:22:21.035 ] 01:22:21.035 }' 01:22:21.035 05:17:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:22:21.035 05:17:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 01:22:21.602 05:17:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 01:22:21.602 05:17:13 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 01:22:21.602 [2024-12-09 05:17:13.190866] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 01:22:22.539 05:17:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 01:22:22.539 05:17:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:22.539 05:17:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 01:22:22.539 [2024-12-09 05:17:14.063986] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 01:22:22.539 [2024-12-09 05:17:14.064063] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 01:22:22.539 [2024-12-09 05:17:14.064306] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000063c0 01:22:22.539 05:17:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:22.539 05:17:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 01:22:22.539 05:17:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 01:22:22.539 05:17:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 01:22:22.539 05:17:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=1 01:22:22.539 05:17:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 01:22:22.539 05:17:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:22:22.539 05:17:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:22:22.539 05:17:14 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:22:22.539 05:17:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:22:22.539 05:17:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 01:22:22.539 05:17:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:22:22.539 05:17:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:22:22.539 05:17:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:22:22.539 05:17:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:22:22.539 05:17:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:22:22.539 05:17:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:22:22.539 05:17:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:22.539 05:17:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 01:22:22.539 05:17:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:22.539 05:17:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:22:22.539 "name": "raid_bdev1", 01:22:22.539 "uuid": "5505e60a-8223-4095-afd5-2b0f8c4f8879", 01:22:22.539 "strip_size_kb": 0, 01:22:22.539 "state": "online", 01:22:22.539 "raid_level": "raid1", 01:22:22.539 "superblock": true, 01:22:22.539 "num_base_bdevs": 2, 01:22:22.539 "num_base_bdevs_discovered": 1, 01:22:22.539 "num_base_bdevs_operational": 1, 01:22:22.539 "base_bdevs_list": [ 01:22:22.539 { 01:22:22.539 "name": null, 01:22:22.539 "uuid": "00000000-0000-0000-0000-000000000000", 01:22:22.539 "is_configured": false, 01:22:22.539 "data_offset": 0, 01:22:22.539 "data_size": 63488 01:22:22.539 }, 01:22:22.539 { 01:22:22.539 "name": 
"BaseBdev2", 01:22:22.539 "uuid": "4a99f7dd-4ece-53c2-ad49-0cadd6bffc8b", 01:22:22.539 "is_configured": true, 01:22:22.539 "data_offset": 2048, 01:22:22.539 "data_size": 63488 01:22:22.539 } 01:22:22.539 ] 01:22:22.539 }' 01:22:22.539 05:17:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:22:22.539 05:17:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 01:22:23.107 05:17:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 01:22:23.108 05:17:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:23.108 05:17:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 01:22:23.108 [2024-12-09 05:17:14.603284] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 01:22:23.108 [2024-12-09 05:17:14.603344] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 01:22:23.108 { 01:22:23.108 "results": [ 01:22:23.108 { 01:22:23.108 "job": "raid_bdev1", 01:22:23.108 "core_mask": "0x1", 01:22:23.108 "workload": "randrw", 01:22:23.108 "percentage": 50, 01:22:23.108 "status": "finished", 01:22:23.108 "queue_depth": 1, 01:22:23.108 "io_size": 131072, 01:22:23.108 "runtime": 1.410249, 01:22:23.108 "iops": 14552.749195354863, 01:22:23.108 "mibps": 1819.093649419358, 01:22:23.108 "io_failed": 0, 01:22:23.108 "io_timeout": 0, 01:22:23.108 "avg_latency_us": 64.52511904603705, 01:22:23.108 "min_latency_us": 35.60727272727273, 01:22:23.108 "max_latency_us": 1578.8218181818181 01:22:23.108 } 01:22:23.108 ], 01:22:23.108 "core_count": 1 01:22:23.108 } 01:22:23.108 [2024-12-09 05:17:14.607049] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 01:22:23.108 [2024-12-09 05:17:14.607119] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:22:23.108 [2024-12-09 05:17:14.607328] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: 
*DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 01:22:23.108 [2024-12-09 05:17:14.607364] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 01:22:23.108 05:17:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:23.108 05:17:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63522 01:22:23.108 05:17:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 63522 ']' 01:22:23.108 05:17:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 63522 01:22:23.108 05:17:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 01:22:23.108 05:17:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:22:23.108 05:17:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63522 01:22:23.108 05:17:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:22:23.108 05:17:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:22:23.108 05:17:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63522' 01:22:23.108 killing process with pid 63522 01:22:23.108 05:17:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 63522 01:22:23.108 [2024-12-09 05:17:14.653278] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 01:22:23.108 05:17:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 63522 01:22:23.366 [2024-12-09 05:17:14.777778] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 01:22:24.741 05:17:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.1LRCY0i0IT 01:22:24.741 05:17:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- 
# grep raid_bdev1 01:22:24.741 05:17:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 01:22:24.741 05:17:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 01:22:24.741 05:17:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 01:22:24.741 05:17:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 01:22:24.741 05:17:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 01:22:24.741 05:17:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 01:22:24.741 ************************************ 01:22:24.741 END TEST raid_write_error_test 01:22:24.741 ************************************ 01:22:24.741 01:22:24.741 real 0m4.801s 01:22:24.741 user 0m5.876s 01:22:24.741 sys 0m0.634s 01:22:24.741 05:17:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 01:22:24.741 05:17:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 01:22:24.741 05:17:16 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 01:22:24.741 05:17:16 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 01:22:24.741 05:17:16 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 01:22:24.741 05:17:16 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 01:22:24.741 05:17:16 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 01:22:24.741 05:17:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 01:22:24.741 ************************************ 01:22:24.741 START TEST raid_state_function_test 01:22:24.741 ************************************ 01:22:24.741 05:17:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 false 01:22:24.741 05:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local 
raid_level=raid0 01:22:24.741 05:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 01:22:24.741 05:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 01:22:24.741 05:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 01:22:24.741 05:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 01:22:24.741 05:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 01:22:24.741 05:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 01:22:24.741 05:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 01:22:24.741 05:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 01:22:24.741 05:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 01:22:24.741 05:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 01:22:24.741 05:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 01:22:24.741 05:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 01:22:24.741 05:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 01:22:24.741 05:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 01:22:24.741 05:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 01:22:24.741 05:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 01:22:24.741 05:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 01:22:24.741 05:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 01:22:24.741 05:17:16 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 01:22:24.741 05:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 01:22:24.741 05:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 01:22:24.741 05:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 01:22:24.741 05:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 01:22:24.741 05:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 01:22:24.741 05:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 01:22:24.741 Process raid pid: 63671 01:22:24.741 05:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=63671 01:22:24.741 05:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 01:22:24.741 05:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 63671' 01:22:24.741 05:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 63671 01:22:24.741 05:17:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 63671 ']' 01:22:24.741 05:17:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:22:24.741 05:17:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 01:22:24.741 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:22:24.741 05:17:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
01:22:24.741 05:17:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 01:22:24.741 05:17:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:22:24.741 [2024-12-09 05:17:16.329288] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:22:24.741 [2024-12-09 05:17:16.329526] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:22:25.000 [2024-12-09 05:17:16.514852] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:22:25.258 [2024-12-09 05:17:16.656039] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:22:25.516 [2024-12-09 05:17:16.883486] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 01:22:25.516 [2024-12-09 05:17:16.883557] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 01:22:25.775 05:17:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:22:25.775 05:17:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 01:22:25.775 05:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 01:22:25.775 05:17:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:25.775 05:17:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:22:25.775 [2024-12-09 05:17:17.328897] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 01:22:25.775 [2024-12-09 05:17:17.328962] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 01:22:25.775 [2024-12-09 05:17:17.328979] 
bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 01:22:25.775 [2024-12-09 05:17:17.328996] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 01:22:25.775 [2024-12-09 05:17:17.329006] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 01:22:25.775 [2024-12-09 05:17:17.329021] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 01:22:25.775 05:17:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:25.775 05:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 01:22:25.775 05:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:22:25.775 05:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:22:25.775 05:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 01:22:25.775 05:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:22:25.775 05:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 01:22:25.775 05:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:22:25.775 05:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:22:25.775 05:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:22:25.775 05:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:22:25.775 05:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:22:25.775 05:17:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:25.775 05:17:17 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:22:25.775 05:17:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:22:25.775 05:17:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:26.045 05:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:22:26.045 "name": "Existed_Raid", 01:22:26.045 "uuid": "00000000-0000-0000-0000-000000000000", 01:22:26.045 "strip_size_kb": 64, 01:22:26.045 "state": "configuring", 01:22:26.045 "raid_level": "raid0", 01:22:26.045 "superblock": false, 01:22:26.045 "num_base_bdevs": 3, 01:22:26.045 "num_base_bdevs_discovered": 0, 01:22:26.045 "num_base_bdevs_operational": 3, 01:22:26.045 "base_bdevs_list": [ 01:22:26.045 { 01:22:26.045 "name": "BaseBdev1", 01:22:26.045 "uuid": "00000000-0000-0000-0000-000000000000", 01:22:26.045 "is_configured": false, 01:22:26.045 "data_offset": 0, 01:22:26.045 "data_size": 0 01:22:26.045 }, 01:22:26.045 { 01:22:26.045 "name": "BaseBdev2", 01:22:26.045 "uuid": "00000000-0000-0000-0000-000000000000", 01:22:26.045 "is_configured": false, 01:22:26.045 "data_offset": 0, 01:22:26.045 "data_size": 0 01:22:26.045 }, 01:22:26.045 { 01:22:26.045 "name": "BaseBdev3", 01:22:26.045 "uuid": "00000000-0000-0000-0000-000000000000", 01:22:26.045 "is_configured": false, 01:22:26.045 "data_offset": 0, 01:22:26.045 "data_size": 0 01:22:26.045 } 01:22:26.045 ] 01:22:26.045 }' 01:22:26.045 05:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:22:26.045 05:17:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:22:26.321 05:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 01:22:26.321 05:17:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:26.321 05:17:17 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:22:26.321 [2024-12-09 05:17:17.860980] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 01:22:26.321 [2024-12-09 05:17:17.861028] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 01:22:26.321 05:17:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:26.321 05:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 01:22:26.321 05:17:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:26.321 05:17:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:22:26.321 [2024-12-09 05:17:17.868979] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 01:22:26.321 [2024-12-09 05:17:17.869030] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 01:22:26.321 [2024-12-09 05:17:17.869045] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 01:22:26.321 [2024-12-09 05:17:17.869061] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 01:22:26.321 [2024-12-09 05:17:17.869070] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 01:22:26.321 [2024-12-09 05:17:17.869085] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 01:22:26.321 05:17:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:26.321 05:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 01:22:26.321 05:17:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
01:22:26.321 05:17:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:22:26.321 [2024-12-09 05:17:17.917477] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 01:22:26.321 BaseBdev1 01:22:26.321 05:17:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:26.321 05:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 01:22:26.321 05:17:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 01:22:26.321 05:17:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 01:22:26.321 05:17:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 01:22:26.321 05:17:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 01:22:26.321 05:17:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 01:22:26.321 05:17:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 01:22:26.321 05:17:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:26.321 05:17:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:22:26.321 05:17:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:26.321 05:17:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 01:22:26.321 05:17:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:26.321 05:17:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:22:26.580 [ 01:22:26.580 { 01:22:26.580 "name": "BaseBdev1", 01:22:26.580 "aliases": [ 01:22:26.580 "07b2aa95-e69a-4203-a459-146a2ad8eb55" 01:22:26.580 ], 01:22:26.580 
"product_name": "Malloc disk", 01:22:26.580 "block_size": 512, 01:22:26.580 "num_blocks": 65536, 01:22:26.580 "uuid": "07b2aa95-e69a-4203-a459-146a2ad8eb55", 01:22:26.580 "assigned_rate_limits": { 01:22:26.580 "rw_ios_per_sec": 0, 01:22:26.580 "rw_mbytes_per_sec": 0, 01:22:26.580 "r_mbytes_per_sec": 0, 01:22:26.580 "w_mbytes_per_sec": 0 01:22:26.580 }, 01:22:26.580 "claimed": true, 01:22:26.580 "claim_type": "exclusive_write", 01:22:26.580 "zoned": false, 01:22:26.580 "supported_io_types": { 01:22:26.580 "read": true, 01:22:26.580 "write": true, 01:22:26.580 "unmap": true, 01:22:26.580 "flush": true, 01:22:26.580 "reset": true, 01:22:26.580 "nvme_admin": false, 01:22:26.580 "nvme_io": false, 01:22:26.580 "nvme_io_md": false, 01:22:26.580 "write_zeroes": true, 01:22:26.580 "zcopy": true, 01:22:26.580 "get_zone_info": false, 01:22:26.580 "zone_management": false, 01:22:26.580 "zone_append": false, 01:22:26.580 "compare": false, 01:22:26.580 "compare_and_write": false, 01:22:26.580 "abort": true, 01:22:26.580 "seek_hole": false, 01:22:26.580 "seek_data": false, 01:22:26.580 "copy": true, 01:22:26.580 "nvme_iov_md": false 01:22:26.580 }, 01:22:26.580 "memory_domains": [ 01:22:26.580 { 01:22:26.580 "dma_device_id": "system", 01:22:26.580 "dma_device_type": 1 01:22:26.580 }, 01:22:26.580 { 01:22:26.580 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:22:26.580 "dma_device_type": 2 01:22:26.580 } 01:22:26.580 ], 01:22:26.580 "driver_specific": {} 01:22:26.580 } 01:22:26.580 ] 01:22:26.580 05:17:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:26.580 05:17:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 01:22:26.580 05:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 01:22:26.580 05:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:22:26.580 05:17:17 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:22:26.580 05:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 01:22:26.580 05:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:22:26.580 05:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 01:22:26.580 05:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:22:26.580 05:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:22:26.580 05:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:22:26.580 05:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:22:26.580 05:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:22:26.580 05:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:22:26.580 05:17:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:26.580 05:17:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:22:26.580 05:17:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:26.580 05:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:22:26.580 "name": "Existed_Raid", 01:22:26.580 "uuid": "00000000-0000-0000-0000-000000000000", 01:22:26.580 "strip_size_kb": 64, 01:22:26.580 "state": "configuring", 01:22:26.580 "raid_level": "raid0", 01:22:26.580 "superblock": false, 01:22:26.580 "num_base_bdevs": 3, 01:22:26.580 "num_base_bdevs_discovered": 1, 01:22:26.580 "num_base_bdevs_operational": 3, 01:22:26.580 "base_bdevs_list": [ 01:22:26.580 { 01:22:26.580 "name": "BaseBdev1", 
01:22:26.580 "uuid": "07b2aa95-e69a-4203-a459-146a2ad8eb55", 01:22:26.580 "is_configured": true, 01:22:26.580 "data_offset": 0, 01:22:26.580 "data_size": 65536 01:22:26.580 }, 01:22:26.580 { 01:22:26.580 "name": "BaseBdev2", 01:22:26.580 "uuid": "00000000-0000-0000-0000-000000000000", 01:22:26.580 "is_configured": false, 01:22:26.580 "data_offset": 0, 01:22:26.580 "data_size": 0 01:22:26.580 }, 01:22:26.580 { 01:22:26.580 "name": "BaseBdev3", 01:22:26.580 "uuid": "00000000-0000-0000-0000-000000000000", 01:22:26.580 "is_configured": false, 01:22:26.580 "data_offset": 0, 01:22:26.580 "data_size": 0 01:22:26.580 } 01:22:26.580 ] 01:22:26.580 }' 01:22:26.580 05:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:22:26.580 05:17:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:22:27.147 05:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 01:22:27.147 05:17:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:27.147 05:17:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:22:27.147 [2024-12-09 05:17:18.485784] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 01:22:27.147 [2024-12-09 05:17:18.485852] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 01:22:27.147 05:17:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:27.147 05:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 01:22:27.147 05:17:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:27.147 05:17:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:22:27.147 [2024-12-09 
05:17:18.493811] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 01:22:27.147 [2024-12-09 05:17:18.496776] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 01:22:27.147 [2024-12-09 05:17:18.496831] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 01:22:27.147 [2024-12-09 05:17:18.496848] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 01:22:27.147 [2024-12-09 05:17:18.496863] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 01:22:27.147 05:17:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:27.147 05:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 01:22:27.147 05:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 01:22:27.147 05:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 01:22:27.147 05:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:22:27.147 05:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:22:27.147 05:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 01:22:27.147 05:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:22:27.147 05:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 01:22:27.147 05:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:22:27.147 05:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:22:27.147 05:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 01:22:27.147 05:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:22:27.147 05:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:22:27.147 05:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:22:27.147 05:17:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:27.147 05:17:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:22:27.147 05:17:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:27.147 05:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:22:27.147 "name": "Existed_Raid", 01:22:27.147 "uuid": "00000000-0000-0000-0000-000000000000", 01:22:27.147 "strip_size_kb": 64, 01:22:27.147 "state": "configuring", 01:22:27.147 "raid_level": "raid0", 01:22:27.147 "superblock": false, 01:22:27.147 "num_base_bdevs": 3, 01:22:27.147 "num_base_bdevs_discovered": 1, 01:22:27.147 "num_base_bdevs_operational": 3, 01:22:27.147 "base_bdevs_list": [ 01:22:27.147 { 01:22:27.147 "name": "BaseBdev1", 01:22:27.147 "uuid": "07b2aa95-e69a-4203-a459-146a2ad8eb55", 01:22:27.147 "is_configured": true, 01:22:27.147 "data_offset": 0, 01:22:27.147 "data_size": 65536 01:22:27.147 }, 01:22:27.147 { 01:22:27.147 "name": "BaseBdev2", 01:22:27.147 "uuid": "00000000-0000-0000-0000-000000000000", 01:22:27.147 "is_configured": false, 01:22:27.147 "data_offset": 0, 01:22:27.147 "data_size": 0 01:22:27.147 }, 01:22:27.147 { 01:22:27.147 "name": "BaseBdev3", 01:22:27.147 "uuid": "00000000-0000-0000-0000-000000000000", 01:22:27.147 "is_configured": false, 01:22:27.147 "data_offset": 0, 01:22:27.147 "data_size": 0 01:22:27.147 } 01:22:27.147 ] 01:22:27.147 }' 01:22:27.147 05:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
01:22:27.147 05:17:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:22:27.406 05:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 01:22:27.406 05:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:27.406 05:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:22:27.664 [2024-12-09 05:17:19.053730] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 01:22:27.664 BaseBdev2 01:22:27.664 05:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:27.664 05:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 01:22:27.664 05:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 01:22:27.664 05:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 01:22:27.664 05:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 01:22:27.664 05:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 01:22:27.664 05:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 01:22:27.664 05:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 01:22:27.664 05:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:27.664 05:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:22:27.664 05:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:27.664 05:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 01:22:27.664 05:17:19 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:27.664 05:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:22:27.664 [ 01:22:27.664 { 01:22:27.664 "name": "BaseBdev2", 01:22:27.664 "aliases": [ 01:22:27.664 "aefed887-3cb1-47e8-8803-3068cae78dd5" 01:22:27.664 ], 01:22:27.664 "product_name": "Malloc disk", 01:22:27.664 "block_size": 512, 01:22:27.664 "num_blocks": 65536, 01:22:27.664 "uuid": "aefed887-3cb1-47e8-8803-3068cae78dd5", 01:22:27.664 "assigned_rate_limits": { 01:22:27.664 "rw_ios_per_sec": 0, 01:22:27.664 "rw_mbytes_per_sec": 0, 01:22:27.664 "r_mbytes_per_sec": 0, 01:22:27.664 "w_mbytes_per_sec": 0 01:22:27.664 }, 01:22:27.664 "claimed": true, 01:22:27.664 "claim_type": "exclusive_write", 01:22:27.664 "zoned": false, 01:22:27.664 "supported_io_types": { 01:22:27.664 "read": true, 01:22:27.664 "write": true, 01:22:27.664 "unmap": true, 01:22:27.664 "flush": true, 01:22:27.664 "reset": true, 01:22:27.664 "nvme_admin": false, 01:22:27.664 "nvme_io": false, 01:22:27.664 "nvme_io_md": false, 01:22:27.664 "write_zeroes": true, 01:22:27.664 "zcopy": true, 01:22:27.664 "get_zone_info": false, 01:22:27.664 "zone_management": false, 01:22:27.664 "zone_append": false, 01:22:27.664 "compare": false, 01:22:27.664 "compare_and_write": false, 01:22:27.664 "abort": true, 01:22:27.664 "seek_hole": false, 01:22:27.664 "seek_data": false, 01:22:27.664 "copy": true, 01:22:27.664 "nvme_iov_md": false 01:22:27.664 }, 01:22:27.664 "memory_domains": [ 01:22:27.664 { 01:22:27.664 "dma_device_id": "system", 01:22:27.664 "dma_device_type": 1 01:22:27.664 }, 01:22:27.664 { 01:22:27.664 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:22:27.664 "dma_device_type": 2 01:22:27.664 } 01:22:27.664 ], 01:22:27.664 "driver_specific": {} 01:22:27.664 } 01:22:27.664 ] 01:22:27.664 05:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:27.664 05:17:19 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 01:22:27.664 05:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 01:22:27.664 05:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 01:22:27.664 05:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 01:22:27.664 05:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:22:27.664 05:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:22:27.664 05:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 01:22:27.664 05:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:22:27.664 05:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 01:22:27.664 05:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:22:27.664 05:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:22:27.664 05:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:22:27.664 05:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:22:27.664 05:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:22:27.664 05:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:27.664 05:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:22:27.664 05:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:22:27.664 05:17:19 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:27.664 05:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:22:27.664 "name": "Existed_Raid", 01:22:27.664 "uuid": "00000000-0000-0000-0000-000000000000", 01:22:27.664 "strip_size_kb": 64, 01:22:27.664 "state": "configuring", 01:22:27.664 "raid_level": "raid0", 01:22:27.664 "superblock": false, 01:22:27.664 "num_base_bdevs": 3, 01:22:27.664 "num_base_bdevs_discovered": 2, 01:22:27.664 "num_base_bdevs_operational": 3, 01:22:27.664 "base_bdevs_list": [ 01:22:27.664 { 01:22:27.664 "name": "BaseBdev1", 01:22:27.664 "uuid": "07b2aa95-e69a-4203-a459-146a2ad8eb55", 01:22:27.664 "is_configured": true, 01:22:27.664 "data_offset": 0, 01:22:27.664 "data_size": 65536 01:22:27.664 }, 01:22:27.664 { 01:22:27.664 "name": "BaseBdev2", 01:22:27.664 "uuid": "aefed887-3cb1-47e8-8803-3068cae78dd5", 01:22:27.664 "is_configured": true, 01:22:27.664 "data_offset": 0, 01:22:27.664 "data_size": 65536 01:22:27.664 }, 01:22:27.664 { 01:22:27.664 "name": "BaseBdev3", 01:22:27.664 "uuid": "00000000-0000-0000-0000-000000000000", 01:22:27.664 "is_configured": false, 01:22:27.664 "data_offset": 0, 01:22:27.664 "data_size": 0 01:22:27.664 } 01:22:27.664 ] 01:22:27.664 }' 01:22:27.665 05:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:22:27.665 05:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:22:28.230 05:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 01:22:28.230 05:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:28.230 05:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:22:28.230 [2024-12-09 05:17:19.653186] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 01:22:28.230 [2024-12-09 05:17:19.653251] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 01:22:28.230 [2024-12-09 05:17:19.653274] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 01:22:28.230 [2024-12-09 05:17:19.653701] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 01:22:28.230 [2024-12-09 05:17:19.653996] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 01:22:28.230 [2024-12-09 05:17:19.654034] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 01:22:28.230 [2024-12-09 05:17:19.654397] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:22:28.230 BaseBdev3 01:22:28.230 05:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:28.230 05:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 01:22:28.230 05:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 01:22:28.230 05:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 01:22:28.230 05:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 01:22:28.230 05:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 01:22:28.230 05:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 01:22:28.230 05:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 01:22:28.230 05:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:28.230 05:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:22:28.230 05:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:28.230 
05:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 01:22:28.230 05:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:28.230 05:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:22:28.230 [ 01:22:28.230 { 01:22:28.230 "name": "BaseBdev3", 01:22:28.230 "aliases": [ 01:22:28.230 "caa47576-3310-4fbf-8e51-5b5212c73ce9" 01:22:28.230 ], 01:22:28.230 "product_name": "Malloc disk", 01:22:28.230 "block_size": 512, 01:22:28.230 "num_blocks": 65536, 01:22:28.230 "uuid": "caa47576-3310-4fbf-8e51-5b5212c73ce9", 01:22:28.230 "assigned_rate_limits": { 01:22:28.230 "rw_ios_per_sec": 0, 01:22:28.230 "rw_mbytes_per_sec": 0, 01:22:28.230 "r_mbytes_per_sec": 0, 01:22:28.230 "w_mbytes_per_sec": 0 01:22:28.230 }, 01:22:28.230 "claimed": true, 01:22:28.230 "claim_type": "exclusive_write", 01:22:28.230 "zoned": false, 01:22:28.230 "supported_io_types": { 01:22:28.230 "read": true, 01:22:28.230 "write": true, 01:22:28.230 "unmap": true, 01:22:28.230 "flush": true, 01:22:28.230 "reset": true, 01:22:28.230 "nvme_admin": false, 01:22:28.230 "nvme_io": false, 01:22:28.230 "nvme_io_md": false, 01:22:28.230 "write_zeroes": true, 01:22:28.230 "zcopy": true, 01:22:28.230 "get_zone_info": false, 01:22:28.230 "zone_management": false, 01:22:28.230 "zone_append": false, 01:22:28.230 "compare": false, 01:22:28.230 "compare_and_write": false, 01:22:28.230 "abort": true, 01:22:28.230 "seek_hole": false, 01:22:28.230 "seek_data": false, 01:22:28.230 "copy": true, 01:22:28.230 "nvme_iov_md": false 01:22:28.230 }, 01:22:28.230 "memory_domains": [ 01:22:28.230 { 01:22:28.230 "dma_device_id": "system", 01:22:28.230 "dma_device_type": 1 01:22:28.230 }, 01:22:28.230 { 01:22:28.230 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:22:28.230 "dma_device_type": 2 01:22:28.230 } 01:22:28.230 ], 01:22:28.230 "driver_specific": {} 01:22:28.230 } 01:22:28.230 ] 
01:22:28.230 05:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:28.230 05:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 01:22:28.230 05:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 01:22:28.230 05:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 01:22:28.230 05:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 01:22:28.230 05:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:22:28.230 05:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:22:28.230 05:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 01:22:28.230 05:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:22:28.230 05:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 01:22:28.230 05:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:22:28.230 05:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:22:28.230 05:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:22:28.230 05:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:22:28.230 05:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:22:28.230 05:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:22:28.230 05:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:28.230 05:17:19 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 01:22:28.230 05:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:28.230 05:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:22:28.230 "name": "Existed_Raid", 01:22:28.230 "uuid": "cf37ce35-68bd-4ad5-874c-5a925fd04170", 01:22:28.230 "strip_size_kb": 64, 01:22:28.230 "state": "online", 01:22:28.230 "raid_level": "raid0", 01:22:28.230 "superblock": false, 01:22:28.230 "num_base_bdevs": 3, 01:22:28.230 "num_base_bdevs_discovered": 3, 01:22:28.230 "num_base_bdevs_operational": 3, 01:22:28.230 "base_bdevs_list": [ 01:22:28.230 { 01:22:28.230 "name": "BaseBdev1", 01:22:28.230 "uuid": "07b2aa95-e69a-4203-a459-146a2ad8eb55", 01:22:28.230 "is_configured": true, 01:22:28.230 "data_offset": 0, 01:22:28.230 "data_size": 65536 01:22:28.230 }, 01:22:28.230 { 01:22:28.230 "name": "BaseBdev2", 01:22:28.230 "uuid": "aefed887-3cb1-47e8-8803-3068cae78dd5", 01:22:28.230 "is_configured": true, 01:22:28.230 "data_offset": 0, 01:22:28.230 "data_size": 65536 01:22:28.230 }, 01:22:28.230 { 01:22:28.230 "name": "BaseBdev3", 01:22:28.230 "uuid": "caa47576-3310-4fbf-8e51-5b5212c73ce9", 01:22:28.230 "is_configured": true, 01:22:28.230 "data_offset": 0, 01:22:28.230 "data_size": 65536 01:22:28.230 } 01:22:28.230 ] 01:22:28.230 }' 01:22:28.230 05:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:22:28.230 05:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:22:28.795 05:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 01:22:28.795 05:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 01:22:28.795 05:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 01:22:28.795 05:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # 
local base_bdev_names 01:22:28.795 05:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 01:22:28.795 05:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 01:22:28.795 05:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 01:22:28.795 05:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 01:22:28.795 05:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:28.795 05:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:22:28.795 [2024-12-09 05:17:20.213877] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 01:22:28.795 05:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:28.795 05:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 01:22:28.795 "name": "Existed_Raid", 01:22:28.795 "aliases": [ 01:22:28.795 "cf37ce35-68bd-4ad5-874c-5a925fd04170" 01:22:28.795 ], 01:22:28.795 "product_name": "Raid Volume", 01:22:28.795 "block_size": 512, 01:22:28.795 "num_blocks": 196608, 01:22:28.795 "uuid": "cf37ce35-68bd-4ad5-874c-5a925fd04170", 01:22:28.795 "assigned_rate_limits": { 01:22:28.795 "rw_ios_per_sec": 0, 01:22:28.795 "rw_mbytes_per_sec": 0, 01:22:28.795 "r_mbytes_per_sec": 0, 01:22:28.795 "w_mbytes_per_sec": 0 01:22:28.795 }, 01:22:28.795 "claimed": false, 01:22:28.795 "zoned": false, 01:22:28.795 "supported_io_types": { 01:22:28.795 "read": true, 01:22:28.795 "write": true, 01:22:28.795 "unmap": true, 01:22:28.795 "flush": true, 01:22:28.795 "reset": true, 01:22:28.795 "nvme_admin": false, 01:22:28.795 "nvme_io": false, 01:22:28.795 "nvme_io_md": false, 01:22:28.795 "write_zeroes": true, 01:22:28.795 "zcopy": false, 01:22:28.795 "get_zone_info": false, 01:22:28.796 "zone_management": false, 01:22:28.796 
"zone_append": false, 01:22:28.796 "compare": false, 01:22:28.796 "compare_and_write": false, 01:22:28.796 "abort": false, 01:22:28.796 "seek_hole": false, 01:22:28.796 "seek_data": false, 01:22:28.796 "copy": false, 01:22:28.796 "nvme_iov_md": false 01:22:28.796 }, 01:22:28.796 "memory_domains": [ 01:22:28.796 { 01:22:28.796 "dma_device_id": "system", 01:22:28.796 "dma_device_type": 1 01:22:28.796 }, 01:22:28.796 { 01:22:28.796 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:22:28.796 "dma_device_type": 2 01:22:28.796 }, 01:22:28.796 { 01:22:28.796 "dma_device_id": "system", 01:22:28.796 "dma_device_type": 1 01:22:28.796 }, 01:22:28.796 { 01:22:28.796 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:22:28.796 "dma_device_type": 2 01:22:28.796 }, 01:22:28.796 { 01:22:28.796 "dma_device_id": "system", 01:22:28.796 "dma_device_type": 1 01:22:28.796 }, 01:22:28.796 { 01:22:28.796 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:22:28.796 "dma_device_type": 2 01:22:28.796 } 01:22:28.796 ], 01:22:28.796 "driver_specific": { 01:22:28.796 "raid": { 01:22:28.796 "uuid": "cf37ce35-68bd-4ad5-874c-5a925fd04170", 01:22:28.796 "strip_size_kb": 64, 01:22:28.796 "state": "online", 01:22:28.796 "raid_level": "raid0", 01:22:28.796 "superblock": false, 01:22:28.796 "num_base_bdevs": 3, 01:22:28.796 "num_base_bdevs_discovered": 3, 01:22:28.796 "num_base_bdevs_operational": 3, 01:22:28.796 "base_bdevs_list": [ 01:22:28.796 { 01:22:28.796 "name": "BaseBdev1", 01:22:28.796 "uuid": "07b2aa95-e69a-4203-a459-146a2ad8eb55", 01:22:28.796 "is_configured": true, 01:22:28.796 "data_offset": 0, 01:22:28.796 "data_size": 65536 01:22:28.796 }, 01:22:28.796 { 01:22:28.796 "name": "BaseBdev2", 01:22:28.796 "uuid": "aefed887-3cb1-47e8-8803-3068cae78dd5", 01:22:28.796 "is_configured": true, 01:22:28.796 "data_offset": 0, 01:22:28.796 "data_size": 65536 01:22:28.796 }, 01:22:28.796 { 01:22:28.796 "name": "BaseBdev3", 01:22:28.796 "uuid": "caa47576-3310-4fbf-8e51-5b5212c73ce9", 01:22:28.796 "is_configured": true, 
01:22:28.796 "data_offset": 0, 01:22:28.796 "data_size": 65536 01:22:28.796 } 01:22:28.796 ] 01:22:28.796 } 01:22:28.796 } 01:22:28.796 }' 01:22:28.796 05:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 01:22:28.796 05:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 01:22:28.796 BaseBdev2 01:22:28.796 BaseBdev3' 01:22:28.796 05:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:22:28.796 05:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 01:22:28.796 05:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:22:28.796 05:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 01:22:28.796 05:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:28.796 05:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:22:28.796 05:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:22:28.796 05:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:28.796 05:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:22:28.796 05:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:22:28.796 05:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:22:29.054 05:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 01:22:29.054 05:17:20 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:22:29.054 05:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:29.054 05:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:22:29.054 05:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:29.054 05:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:22:29.054 05:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:22:29.054 05:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:22:29.054 05:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:22:29.054 05:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 01:22:29.054 05:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:29.054 05:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:22:29.054 05:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:29.054 05:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:22:29.054 05:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:22:29.054 05:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 01:22:29.054 05:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:29.054 05:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:22:29.054 [2024-12-09 05:17:20.525467] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 01:22:29.054 [2024-12-09 05:17:20.525711] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 01:22:29.054 [2024-12-09 05:17:20.525813] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 01:22:29.054 05:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:29.054 05:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 01:22:29.054 05:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 01:22:29.054 05:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 01:22:29.054 05:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 01:22:29.054 05:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 01:22:29.054 05:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 01:22:29.054 05:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:22:29.054 05:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 01:22:29.054 05:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 01:22:29.054 05:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:22:29.054 05:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 01:22:29.054 05:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:22:29.054 05:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:22:29.054 05:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
01:22:29.054 05:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:22:29.054 05:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:22:29.054 05:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:29.054 05:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:22:29.054 05:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:22:29.054 05:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:29.054 05:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:22:29.054 "name": "Existed_Raid", 01:22:29.054 "uuid": "cf37ce35-68bd-4ad5-874c-5a925fd04170", 01:22:29.054 "strip_size_kb": 64, 01:22:29.054 "state": "offline", 01:22:29.054 "raid_level": "raid0", 01:22:29.054 "superblock": false, 01:22:29.054 "num_base_bdevs": 3, 01:22:29.054 "num_base_bdevs_discovered": 2, 01:22:29.054 "num_base_bdevs_operational": 2, 01:22:29.054 "base_bdevs_list": [ 01:22:29.054 { 01:22:29.054 "name": null, 01:22:29.054 "uuid": "00000000-0000-0000-0000-000000000000", 01:22:29.054 "is_configured": false, 01:22:29.054 "data_offset": 0, 01:22:29.054 "data_size": 65536 01:22:29.054 }, 01:22:29.054 { 01:22:29.054 "name": "BaseBdev2", 01:22:29.054 "uuid": "aefed887-3cb1-47e8-8803-3068cae78dd5", 01:22:29.054 "is_configured": true, 01:22:29.054 "data_offset": 0, 01:22:29.054 "data_size": 65536 01:22:29.054 }, 01:22:29.054 { 01:22:29.054 "name": "BaseBdev3", 01:22:29.054 "uuid": "caa47576-3310-4fbf-8e51-5b5212c73ce9", 01:22:29.054 "is_configured": true, 01:22:29.054 "data_offset": 0, 01:22:29.054 "data_size": 65536 01:22:29.054 } 01:22:29.054 ] 01:22:29.054 }' 01:22:29.054 05:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:22:29.054 05:17:20 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:22:29.619 05:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 01:22:29.619 05:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 01:22:29.619 05:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 01:22:29.619 05:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 01:22:29.619 05:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:29.619 05:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:22:29.619 05:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:29.619 05:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 01:22:29.619 05:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 01:22:29.619 05:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 01:22:29.619 05:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:29.619 05:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:22:29.619 [2024-12-09 05:17:21.198719] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 01:22:29.877 05:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:29.877 05:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 01:22:29.877 05:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 01:22:29.877 05:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 01:22:29.877 05:17:21 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 01:22:29.877 05:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:29.877 05:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:22:29.877 05:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:29.877 05:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 01:22:29.877 05:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 01:22:29.877 05:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 01:22:29.877 05:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:29.877 05:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:22:29.877 [2024-12-09 05:17:21.364713] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 01:22:29.877 [2024-12-09 05:17:21.364783] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 01:22:29.877 05:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:29.877 05:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 01:22:29.877 05:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 01:22:29.877 05:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 01:22:29.877 05:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 01:22:29.877 05:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:29.877 05:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # 
set +x 01:22:29.877 05:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:30.135 05:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 01:22:30.135 05:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 01:22:30.135 05:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 01:22:30.135 05:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 01:22:30.135 05:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 01:22:30.135 05:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 01:22:30.135 05:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:30.135 05:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:22:30.135 BaseBdev2 01:22:30.135 05:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:30.135 05:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 01:22:30.135 05:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 01:22:30.135 05:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 01:22:30.135 05:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 01:22:30.135 05:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 01:22:30.135 05:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 01:22:30.135 05:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 01:22:30.135 05:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 01:22:30.135 05:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:22:30.135 05:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:30.135 05:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 01:22:30.135 05:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:30.135 05:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:22:30.135 [ 01:22:30.135 { 01:22:30.135 "name": "BaseBdev2", 01:22:30.135 "aliases": [ 01:22:30.135 "dc0536cc-d00e-43d5-a3c3-6c3f3d7d4331" 01:22:30.135 ], 01:22:30.135 "product_name": "Malloc disk", 01:22:30.135 "block_size": 512, 01:22:30.135 "num_blocks": 65536, 01:22:30.135 "uuid": "dc0536cc-d00e-43d5-a3c3-6c3f3d7d4331", 01:22:30.135 "assigned_rate_limits": { 01:22:30.135 "rw_ios_per_sec": 0, 01:22:30.135 "rw_mbytes_per_sec": 0, 01:22:30.135 "r_mbytes_per_sec": 0, 01:22:30.135 "w_mbytes_per_sec": 0 01:22:30.135 }, 01:22:30.135 "claimed": false, 01:22:30.135 "zoned": false, 01:22:30.135 "supported_io_types": { 01:22:30.135 "read": true, 01:22:30.135 "write": true, 01:22:30.135 "unmap": true, 01:22:30.135 "flush": true, 01:22:30.135 "reset": true, 01:22:30.135 "nvme_admin": false, 01:22:30.135 "nvme_io": false, 01:22:30.135 "nvme_io_md": false, 01:22:30.135 "write_zeroes": true, 01:22:30.135 "zcopy": true, 01:22:30.135 "get_zone_info": false, 01:22:30.135 "zone_management": false, 01:22:30.135 "zone_append": false, 01:22:30.135 "compare": false, 01:22:30.135 "compare_and_write": false, 01:22:30.135 "abort": true, 01:22:30.135 "seek_hole": false, 01:22:30.135 "seek_data": false, 01:22:30.135 "copy": true, 01:22:30.135 "nvme_iov_md": false 01:22:30.135 }, 01:22:30.135 "memory_domains": [ 01:22:30.135 { 01:22:30.135 "dma_device_id": "system", 01:22:30.135 "dma_device_type": 1 01:22:30.135 }, 
01:22:30.135 { 01:22:30.135 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:22:30.135 "dma_device_type": 2 01:22:30.135 } 01:22:30.135 ], 01:22:30.135 "driver_specific": {} 01:22:30.135 } 01:22:30.135 ] 01:22:30.135 05:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:30.135 05:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 01:22:30.135 05:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 01:22:30.135 05:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 01:22:30.135 05:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 01:22:30.135 05:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:30.135 05:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:22:30.135 BaseBdev3 01:22:30.135 05:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:30.135 05:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 01:22:30.135 05:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 01:22:30.135 05:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 01:22:30.135 05:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 01:22:30.135 05:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 01:22:30.135 05:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 01:22:30.135 05:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 01:22:30.135 05:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
01:22:30.135 05:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:22:30.135 05:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:30.135 05:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 01:22:30.135 05:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:30.135 05:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:22:30.135 [ 01:22:30.135 { 01:22:30.135 "name": "BaseBdev3", 01:22:30.135 "aliases": [ 01:22:30.135 "220f31ee-fdae-4e11-b39c-dce24ffdba41" 01:22:30.135 ], 01:22:30.135 "product_name": "Malloc disk", 01:22:30.135 "block_size": 512, 01:22:30.135 "num_blocks": 65536, 01:22:30.135 "uuid": "220f31ee-fdae-4e11-b39c-dce24ffdba41", 01:22:30.135 "assigned_rate_limits": { 01:22:30.135 "rw_ios_per_sec": 0, 01:22:30.135 "rw_mbytes_per_sec": 0, 01:22:30.135 "r_mbytes_per_sec": 0, 01:22:30.135 "w_mbytes_per_sec": 0 01:22:30.135 }, 01:22:30.135 "claimed": false, 01:22:30.135 "zoned": false, 01:22:30.135 "supported_io_types": { 01:22:30.135 "read": true, 01:22:30.135 "write": true, 01:22:30.135 "unmap": true, 01:22:30.135 "flush": true, 01:22:30.135 "reset": true, 01:22:30.135 "nvme_admin": false, 01:22:30.135 "nvme_io": false, 01:22:30.135 "nvme_io_md": false, 01:22:30.135 "write_zeroes": true, 01:22:30.135 "zcopy": true, 01:22:30.135 "get_zone_info": false, 01:22:30.135 "zone_management": false, 01:22:30.135 "zone_append": false, 01:22:30.135 "compare": false, 01:22:30.135 "compare_and_write": false, 01:22:30.135 "abort": true, 01:22:30.135 "seek_hole": false, 01:22:30.136 "seek_data": false, 01:22:30.136 "copy": true, 01:22:30.136 "nvme_iov_md": false 01:22:30.136 }, 01:22:30.136 "memory_domains": [ 01:22:30.136 { 01:22:30.136 "dma_device_id": "system", 01:22:30.136 "dma_device_type": 1 01:22:30.136 }, 01:22:30.136 { 
01:22:30.136 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:22:30.136 "dma_device_type": 2 01:22:30.136 } 01:22:30.136 ], 01:22:30.136 "driver_specific": {} 01:22:30.136 } 01:22:30.136 ] 01:22:30.136 05:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:30.136 05:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 01:22:30.136 05:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 01:22:30.136 05:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 01:22:30.136 05:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 01:22:30.136 05:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:30.136 05:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:22:30.136 [2024-12-09 05:17:21.686905] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 01:22:30.136 [2024-12-09 05:17:21.686998] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 01:22:30.136 [2024-12-09 05:17:21.687031] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 01:22:30.136 [2024-12-09 05:17:21.689571] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 01:22:30.136 05:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:30.136 05:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 01:22:30.136 05:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:22:30.136 05:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 01:22:30.136 05:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 01:22:30.136 05:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:22:30.136 05:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 01:22:30.136 05:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:22:30.136 05:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:22:30.136 05:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:22:30.136 05:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:22:30.136 05:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:22:30.136 05:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:22:30.136 05:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:30.136 05:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:22:30.136 05:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:30.393 05:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:22:30.393 "name": "Existed_Raid", 01:22:30.393 "uuid": "00000000-0000-0000-0000-000000000000", 01:22:30.393 "strip_size_kb": 64, 01:22:30.393 "state": "configuring", 01:22:30.393 "raid_level": "raid0", 01:22:30.393 "superblock": false, 01:22:30.393 "num_base_bdevs": 3, 01:22:30.393 "num_base_bdevs_discovered": 2, 01:22:30.393 "num_base_bdevs_operational": 3, 01:22:30.393 "base_bdevs_list": [ 01:22:30.393 { 01:22:30.393 "name": "BaseBdev1", 01:22:30.393 "uuid": "00000000-0000-0000-0000-000000000000", 01:22:30.393 
"is_configured": false, 01:22:30.393 "data_offset": 0, 01:22:30.393 "data_size": 0 01:22:30.393 }, 01:22:30.393 { 01:22:30.393 "name": "BaseBdev2", 01:22:30.393 "uuid": "dc0536cc-d00e-43d5-a3c3-6c3f3d7d4331", 01:22:30.393 "is_configured": true, 01:22:30.393 "data_offset": 0, 01:22:30.393 "data_size": 65536 01:22:30.393 }, 01:22:30.393 { 01:22:30.393 "name": "BaseBdev3", 01:22:30.393 "uuid": "220f31ee-fdae-4e11-b39c-dce24ffdba41", 01:22:30.393 "is_configured": true, 01:22:30.393 "data_offset": 0, 01:22:30.393 "data_size": 65536 01:22:30.393 } 01:22:30.393 ] 01:22:30.393 }' 01:22:30.393 05:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:22:30.393 05:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:22:30.958 05:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 01:22:30.958 05:17:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:30.958 05:17:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:22:30.958 [2024-12-09 05:17:22.275202] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 01:22:30.958 05:17:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:30.958 05:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 01:22:30.958 05:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:22:30.958 05:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:22:30.958 05:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 01:22:30.958 05:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:22:30.958 05:17:22 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 01:22:30.958 05:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:22:30.958 05:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:22:30.958 05:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:22:30.958 05:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:22:30.958 05:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:22:30.958 05:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:22:30.958 05:17:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:30.958 05:17:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:22:30.958 05:17:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:30.958 05:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:22:30.958 "name": "Existed_Raid", 01:22:30.958 "uuid": "00000000-0000-0000-0000-000000000000", 01:22:30.958 "strip_size_kb": 64, 01:22:30.958 "state": "configuring", 01:22:30.958 "raid_level": "raid0", 01:22:30.958 "superblock": false, 01:22:30.958 "num_base_bdevs": 3, 01:22:30.958 "num_base_bdevs_discovered": 1, 01:22:30.958 "num_base_bdevs_operational": 3, 01:22:30.958 "base_bdevs_list": [ 01:22:30.958 { 01:22:30.958 "name": "BaseBdev1", 01:22:30.958 "uuid": "00000000-0000-0000-0000-000000000000", 01:22:30.958 "is_configured": false, 01:22:30.958 "data_offset": 0, 01:22:30.958 "data_size": 0 01:22:30.958 }, 01:22:30.958 { 01:22:30.958 "name": null, 01:22:30.958 "uuid": "dc0536cc-d00e-43d5-a3c3-6c3f3d7d4331", 01:22:30.958 "is_configured": false, 01:22:30.958 "data_offset": 0, 
01:22:30.958 "data_size": 65536 01:22:30.958 }, 01:22:30.958 { 01:22:30.958 "name": "BaseBdev3", 01:22:30.958 "uuid": "220f31ee-fdae-4e11-b39c-dce24ffdba41", 01:22:30.958 "is_configured": true, 01:22:30.958 "data_offset": 0, 01:22:30.958 "data_size": 65536 01:22:30.958 } 01:22:30.958 ] 01:22:30.958 }' 01:22:30.958 05:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:22:30.958 05:17:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:22:31.217 05:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 01:22:31.217 05:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 01:22:31.217 05:17:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:31.217 05:17:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:22:31.217 05:17:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:31.475 05:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 01:22:31.475 05:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 01:22:31.475 05:17:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:31.475 05:17:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:22:31.475 [2024-12-09 05:17:22.897814] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 01:22:31.475 BaseBdev1 01:22:31.475 05:17:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:31.475 05:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 01:22:31.475 05:17:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local 
bdev_name=BaseBdev1 01:22:31.475 05:17:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 01:22:31.475 05:17:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 01:22:31.475 05:17:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 01:22:31.475 05:17:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 01:22:31.475 05:17:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 01:22:31.475 05:17:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:31.475 05:17:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:22:31.475 05:17:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:31.475 05:17:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 01:22:31.475 05:17:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:31.475 05:17:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:22:31.475 [ 01:22:31.475 { 01:22:31.475 "name": "BaseBdev1", 01:22:31.475 "aliases": [ 01:22:31.475 "94931259-bfd3-4317-ab55-89248d75e5f3" 01:22:31.475 ], 01:22:31.475 "product_name": "Malloc disk", 01:22:31.475 "block_size": 512, 01:22:31.475 "num_blocks": 65536, 01:22:31.475 "uuid": "94931259-bfd3-4317-ab55-89248d75e5f3", 01:22:31.475 "assigned_rate_limits": { 01:22:31.475 "rw_ios_per_sec": 0, 01:22:31.475 "rw_mbytes_per_sec": 0, 01:22:31.475 "r_mbytes_per_sec": 0, 01:22:31.475 "w_mbytes_per_sec": 0 01:22:31.475 }, 01:22:31.475 "claimed": true, 01:22:31.475 "claim_type": "exclusive_write", 01:22:31.475 "zoned": false, 01:22:31.475 "supported_io_types": { 01:22:31.475 "read": true, 01:22:31.475 "write": true, 01:22:31.475 "unmap": 
true, 01:22:31.475 "flush": true, 01:22:31.475 "reset": true, 01:22:31.475 "nvme_admin": false, 01:22:31.475 "nvme_io": false, 01:22:31.475 "nvme_io_md": false, 01:22:31.475 "write_zeroes": true, 01:22:31.475 "zcopy": true, 01:22:31.475 "get_zone_info": false, 01:22:31.475 "zone_management": false, 01:22:31.475 "zone_append": false, 01:22:31.475 "compare": false, 01:22:31.475 "compare_and_write": false, 01:22:31.475 "abort": true, 01:22:31.475 "seek_hole": false, 01:22:31.475 "seek_data": false, 01:22:31.475 "copy": true, 01:22:31.475 "nvme_iov_md": false 01:22:31.475 }, 01:22:31.475 "memory_domains": [ 01:22:31.475 { 01:22:31.475 "dma_device_id": "system", 01:22:31.475 "dma_device_type": 1 01:22:31.475 }, 01:22:31.475 { 01:22:31.475 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:22:31.475 "dma_device_type": 2 01:22:31.475 } 01:22:31.475 ], 01:22:31.475 "driver_specific": {} 01:22:31.475 } 01:22:31.475 ] 01:22:31.475 05:17:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:31.475 05:17:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 01:22:31.475 05:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 01:22:31.475 05:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:22:31.475 05:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:22:31.475 05:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 01:22:31.476 05:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:22:31.476 05:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 01:22:31.476 05:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:22:31.476 05:17:22 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:22:31.476 05:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:22:31.476 05:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:22:31.476 05:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:22:31.476 05:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:22:31.476 05:17:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:31.476 05:17:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:22:31.476 05:17:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:31.476 05:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:22:31.476 "name": "Existed_Raid", 01:22:31.476 "uuid": "00000000-0000-0000-0000-000000000000", 01:22:31.476 "strip_size_kb": 64, 01:22:31.476 "state": "configuring", 01:22:31.476 "raid_level": "raid0", 01:22:31.476 "superblock": false, 01:22:31.476 "num_base_bdevs": 3, 01:22:31.476 "num_base_bdevs_discovered": 2, 01:22:31.476 "num_base_bdevs_operational": 3, 01:22:31.476 "base_bdevs_list": [ 01:22:31.476 { 01:22:31.476 "name": "BaseBdev1", 01:22:31.476 "uuid": "94931259-bfd3-4317-ab55-89248d75e5f3", 01:22:31.476 "is_configured": true, 01:22:31.476 "data_offset": 0, 01:22:31.476 "data_size": 65536 01:22:31.476 }, 01:22:31.476 { 01:22:31.476 "name": null, 01:22:31.476 "uuid": "dc0536cc-d00e-43d5-a3c3-6c3f3d7d4331", 01:22:31.476 "is_configured": false, 01:22:31.476 "data_offset": 0, 01:22:31.476 "data_size": 65536 01:22:31.476 }, 01:22:31.476 { 01:22:31.476 "name": "BaseBdev3", 01:22:31.476 "uuid": "220f31ee-fdae-4e11-b39c-dce24ffdba41", 01:22:31.476 "is_configured": true, 01:22:31.476 "data_offset": 0, 
01:22:31.476 "data_size": 65536 01:22:31.476 } 01:22:31.476 ] 01:22:31.476 }' 01:22:31.476 05:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:22:31.476 05:17:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:22:32.043 05:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 01:22:32.043 05:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:32.043 05:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:22:32.043 05:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 01:22:32.043 05:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:32.043 05:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 01:22:32.043 05:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 01:22:32.043 05:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:32.043 05:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:22:32.043 [2024-12-09 05:17:23.618110] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 01:22:32.043 05:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:32.043 05:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 01:22:32.043 05:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:22:32.043 05:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:22:32.043 05:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 01:22:32.043 05:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:22:32.043 05:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 01:22:32.043 05:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:22:32.043 05:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:22:32.043 05:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:22:32.043 05:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:22:32.043 05:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:22:32.043 05:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:22:32.043 05:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:32.043 05:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:22:32.043 05:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:32.302 05:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:22:32.302 "name": "Existed_Raid", 01:22:32.302 "uuid": "00000000-0000-0000-0000-000000000000", 01:22:32.302 "strip_size_kb": 64, 01:22:32.302 "state": "configuring", 01:22:32.302 "raid_level": "raid0", 01:22:32.302 "superblock": false, 01:22:32.302 "num_base_bdevs": 3, 01:22:32.302 "num_base_bdevs_discovered": 1, 01:22:32.302 "num_base_bdevs_operational": 3, 01:22:32.302 "base_bdevs_list": [ 01:22:32.302 { 01:22:32.302 "name": "BaseBdev1", 01:22:32.302 "uuid": "94931259-bfd3-4317-ab55-89248d75e5f3", 01:22:32.302 "is_configured": true, 01:22:32.302 "data_offset": 0, 01:22:32.302 "data_size": 65536 01:22:32.302 }, 01:22:32.302 { 
01:22:32.302 "name": null, 01:22:32.302 "uuid": "dc0536cc-d00e-43d5-a3c3-6c3f3d7d4331", 01:22:32.302 "is_configured": false, 01:22:32.302 "data_offset": 0, 01:22:32.302 "data_size": 65536 01:22:32.302 }, 01:22:32.302 { 01:22:32.302 "name": null, 01:22:32.302 "uuid": "220f31ee-fdae-4e11-b39c-dce24ffdba41", 01:22:32.302 "is_configured": false, 01:22:32.302 "data_offset": 0, 01:22:32.302 "data_size": 65536 01:22:32.302 } 01:22:32.302 ] 01:22:32.302 }' 01:22:32.302 05:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:22:32.302 05:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:22:32.560 05:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 01:22:32.560 05:17:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:32.560 05:17:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:22:32.560 05:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 01:22:32.560 05:17:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:32.819 05:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 01:22:32.819 05:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 01:22:32.819 05:17:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:32.819 05:17:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:22:32.819 [2024-12-09 05:17:24.206278] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 01:22:32.819 05:17:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:32.819 05:17:24 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 01:22:32.819 05:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:22:32.819 05:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:22:32.819 05:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 01:22:32.819 05:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:22:32.819 05:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 01:22:32.820 05:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:22:32.820 05:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:22:32.820 05:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:22:32.820 05:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:22:32.820 05:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:22:32.820 05:17:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:32.820 05:17:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:22:32.820 05:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:22:32.820 05:17:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:32.820 05:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:22:32.820 "name": "Existed_Raid", 01:22:32.820 "uuid": "00000000-0000-0000-0000-000000000000", 01:22:32.820 "strip_size_kb": 64, 01:22:32.820 "state": "configuring", 01:22:32.820 "raid_level": "raid0", 01:22:32.820 
"superblock": false, 01:22:32.820 "num_base_bdevs": 3, 01:22:32.820 "num_base_bdevs_discovered": 2, 01:22:32.820 "num_base_bdevs_operational": 3, 01:22:32.820 "base_bdevs_list": [ 01:22:32.820 { 01:22:32.820 "name": "BaseBdev1", 01:22:32.820 "uuid": "94931259-bfd3-4317-ab55-89248d75e5f3", 01:22:32.820 "is_configured": true, 01:22:32.820 "data_offset": 0, 01:22:32.820 "data_size": 65536 01:22:32.820 }, 01:22:32.820 { 01:22:32.820 "name": null, 01:22:32.820 "uuid": "dc0536cc-d00e-43d5-a3c3-6c3f3d7d4331", 01:22:32.820 "is_configured": false, 01:22:32.820 "data_offset": 0, 01:22:32.820 "data_size": 65536 01:22:32.820 }, 01:22:32.820 { 01:22:32.820 "name": "BaseBdev3", 01:22:32.820 "uuid": "220f31ee-fdae-4e11-b39c-dce24ffdba41", 01:22:32.820 "is_configured": true, 01:22:32.820 "data_offset": 0, 01:22:32.820 "data_size": 65536 01:22:32.820 } 01:22:32.820 ] 01:22:32.820 }' 01:22:32.820 05:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:22:32.820 05:17:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:22:33.402 05:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 01:22:33.402 05:17:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:33.402 05:17:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:22:33.402 05:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 01:22:33.402 05:17:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:33.402 05:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 01:22:33.402 05:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 01:22:33.402 05:17:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
01:22:33.402 05:17:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:22:33.402 [2024-12-09 05:17:24.806513] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 01:22:33.402 05:17:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:33.402 05:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 01:22:33.402 05:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:22:33.402 05:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:22:33.402 05:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 01:22:33.402 05:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:22:33.402 05:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 01:22:33.402 05:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:22:33.402 05:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:22:33.402 05:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:22:33.402 05:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:22:33.402 05:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:22:33.402 05:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:22:33.402 05:17:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:33.402 05:17:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:22:33.402 05:17:24 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:33.402 05:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:22:33.402 "name": "Existed_Raid", 01:22:33.402 "uuid": "00000000-0000-0000-0000-000000000000", 01:22:33.402 "strip_size_kb": 64, 01:22:33.402 "state": "configuring", 01:22:33.402 "raid_level": "raid0", 01:22:33.402 "superblock": false, 01:22:33.402 "num_base_bdevs": 3, 01:22:33.402 "num_base_bdevs_discovered": 1, 01:22:33.402 "num_base_bdevs_operational": 3, 01:22:33.402 "base_bdevs_list": [ 01:22:33.402 { 01:22:33.402 "name": null, 01:22:33.402 "uuid": "94931259-bfd3-4317-ab55-89248d75e5f3", 01:22:33.402 "is_configured": false, 01:22:33.402 "data_offset": 0, 01:22:33.402 "data_size": 65536 01:22:33.402 }, 01:22:33.402 { 01:22:33.402 "name": null, 01:22:33.402 "uuid": "dc0536cc-d00e-43d5-a3c3-6c3f3d7d4331", 01:22:33.402 "is_configured": false, 01:22:33.402 "data_offset": 0, 01:22:33.402 "data_size": 65536 01:22:33.402 }, 01:22:33.402 { 01:22:33.402 "name": "BaseBdev3", 01:22:33.402 "uuid": "220f31ee-fdae-4e11-b39c-dce24ffdba41", 01:22:33.402 "is_configured": true, 01:22:33.402 "data_offset": 0, 01:22:33.402 "data_size": 65536 01:22:33.402 } 01:22:33.402 ] 01:22:33.402 }' 01:22:33.402 05:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:22:33.402 05:17:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:22:33.970 05:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 01:22:33.970 05:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 01:22:33.970 05:17:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:33.970 05:17:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:22:33.970 05:17:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 01:22:33.970 05:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 01:22:33.970 05:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 01:22:33.970 05:17:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:33.970 05:17:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:22:33.970 [2024-12-09 05:17:25.548518] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 01:22:33.970 05:17:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:33.970 05:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 01:22:33.970 05:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:22:33.970 05:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:22:33.970 05:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 01:22:33.970 05:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:22:33.970 05:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 01:22:33.970 05:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:22:33.970 05:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:22:33.970 05:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:22:33.970 05:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:22:33.970 05:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 01:22:33.970 05:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:22:33.970 05:17:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:33.970 05:17:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:22:33.970 05:17:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:34.229 05:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:22:34.229 "name": "Existed_Raid", 01:22:34.229 "uuid": "00000000-0000-0000-0000-000000000000", 01:22:34.229 "strip_size_kb": 64, 01:22:34.230 "state": "configuring", 01:22:34.230 "raid_level": "raid0", 01:22:34.230 "superblock": false, 01:22:34.230 "num_base_bdevs": 3, 01:22:34.230 "num_base_bdevs_discovered": 2, 01:22:34.230 "num_base_bdevs_operational": 3, 01:22:34.230 "base_bdevs_list": [ 01:22:34.230 { 01:22:34.230 "name": null, 01:22:34.230 "uuid": "94931259-bfd3-4317-ab55-89248d75e5f3", 01:22:34.230 "is_configured": false, 01:22:34.230 "data_offset": 0, 01:22:34.230 "data_size": 65536 01:22:34.230 }, 01:22:34.230 { 01:22:34.230 "name": "BaseBdev2", 01:22:34.230 "uuid": "dc0536cc-d00e-43d5-a3c3-6c3f3d7d4331", 01:22:34.230 "is_configured": true, 01:22:34.230 "data_offset": 0, 01:22:34.230 "data_size": 65536 01:22:34.230 }, 01:22:34.230 { 01:22:34.230 "name": "BaseBdev3", 01:22:34.230 "uuid": "220f31ee-fdae-4e11-b39c-dce24ffdba41", 01:22:34.230 "is_configured": true, 01:22:34.230 "data_offset": 0, 01:22:34.230 "data_size": 65536 01:22:34.230 } 01:22:34.230 ] 01:22:34.230 }' 01:22:34.230 05:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:22:34.230 05:17:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:22:34.798 05:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 01:22:34.798 
05:17:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:34.798 05:17:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:22:34.798 05:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 01:22:34.798 05:17:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:34.798 05:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 01:22:34.798 05:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 01:22:34.798 05:17:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:34.798 05:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 01:22:34.798 05:17:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:22:34.798 05:17:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:34.798 05:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 94931259-bfd3-4317-ab55-89248d75e5f3 01:22:34.798 05:17:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:34.798 05:17:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:22:34.798 [2024-12-09 05:17:26.288166] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 01:22:34.798 [2024-12-09 05:17:26.288267] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 01:22:34.798 [2024-12-09 05:17:26.288285] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 01:22:34.798 [2024-12-09 05:17:26.288702] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 
01:22:34.798 [2024-12-09 05:17:26.288929] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 01:22:34.798 [2024-12-09 05:17:26.288946] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 01:22:34.798 [2024-12-09 05:17:26.289314] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:22:34.798 NewBaseBdev 01:22:34.798 05:17:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:34.798 05:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 01:22:34.798 05:17:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 01:22:34.798 05:17:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 01:22:34.799 05:17:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 01:22:34.799 05:17:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 01:22:34.799 05:17:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 01:22:34.799 05:17:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 01:22:34.799 05:17:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:34.799 05:17:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:22:34.799 05:17:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:34.799 05:17:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 01:22:34.799 05:17:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:34.799 05:17:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 01:22:34.799 [ 01:22:34.799 { 01:22:34.799 "name": "NewBaseBdev", 01:22:34.799 "aliases": [ 01:22:34.799 "94931259-bfd3-4317-ab55-89248d75e5f3" 01:22:34.799 ], 01:22:34.799 "product_name": "Malloc disk", 01:22:34.799 "block_size": 512, 01:22:34.799 "num_blocks": 65536, 01:22:34.799 "uuid": "94931259-bfd3-4317-ab55-89248d75e5f3", 01:22:34.799 "assigned_rate_limits": { 01:22:34.799 "rw_ios_per_sec": 0, 01:22:34.799 "rw_mbytes_per_sec": 0, 01:22:34.799 "r_mbytes_per_sec": 0, 01:22:34.799 "w_mbytes_per_sec": 0 01:22:34.799 }, 01:22:34.799 "claimed": true, 01:22:34.799 "claim_type": "exclusive_write", 01:22:34.799 "zoned": false, 01:22:34.799 "supported_io_types": { 01:22:34.799 "read": true, 01:22:34.799 "write": true, 01:22:34.799 "unmap": true, 01:22:34.799 "flush": true, 01:22:34.799 "reset": true, 01:22:34.799 "nvme_admin": false, 01:22:34.799 "nvme_io": false, 01:22:34.799 "nvme_io_md": false, 01:22:34.799 "write_zeroes": true, 01:22:34.799 "zcopy": true, 01:22:34.799 "get_zone_info": false, 01:22:34.799 "zone_management": false, 01:22:34.799 "zone_append": false, 01:22:34.799 "compare": false, 01:22:34.799 "compare_and_write": false, 01:22:34.799 "abort": true, 01:22:34.799 "seek_hole": false, 01:22:34.799 "seek_data": false, 01:22:34.799 "copy": true, 01:22:34.799 "nvme_iov_md": false 01:22:34.799 }, 01:22:34.799 "memory_domains": [ 01:22:34.799 { 01:22:34.799 "dma_device_id": "system", 01:22:34.799 "dma_device_type": 1 01:22:34.799 }, 01:22:34.799 { 01:22:34.799 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:22:34.799 "dma_device_type": 2 01:22:34.799 } 01:22:34.799 ], 01:22:34.799 "driver_specific": {} 01:22:34.799 } 01:22:34.799 ] 01:22:34.799 05:17:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:34.799 05:17:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 01:22:34.799 05:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 3 01:22:34.799 05:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:22:34.799 05:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:22:34.799 05:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 01:22:34.799 05:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:22:34.799 05:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 01:22:34.799 05:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:22:34.799 05:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:22:34.799 05:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:22:34.799 05:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:22:34.799 05:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:22:34.799 05:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:22:34.799 05:17:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:34.799 05:17:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:22:34.799 05:17:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:34.799 05:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:22:34.799 "name": "Existed_Raid", 01:22:34.799 "uuid": "b76c0f98-6b6b-4249-9284-744e18f67233", 01:22:34.799 "strip_size_kb": 64, 01:22:34.799 "state": "online", 01:22:34.799 "raid_level": "raid0", 01:22:34.799 "superblock": false, 01:22:34.799 "num_base_bdevs": 3, 01:22:34.799 
"num_base_bdevs_discovered": 3, 01:22:34.799 "num_base_bdevs_operational": 3, 01:22:34.799 "base_bdevs_list": [ 01:22:34.799 { 01:22:34.799 "name": "NewBaseBdev", 01:22:34.799 "uuid": "94931259-bfd3-4317-ab55-89248d75e5f3", 01:22:34.799 "is_configured": true, 01:22:34.799 "data_offset": 0, 01:22:34.799 "data_size": 65536 01:22:34.799 }, 01:22:34.799 { 01:22:34.799 "name": "BaseBdev2", 01:22:34.799 "uuid": "dc0536cc-d00e-43d5-a3c3-6c3f3d7d4331", 01:22:34.799 "is_configured": true, 01:22:34.799 "data_offset": 0, 01:22:34.799 "data_size": 65536 01:22:34.799 }, 01:22:34.799 { 01:22:34.799 "name": "BaseBdev3", 01:22:34.799 "uuid": "220f31ee-fdae-4e11-b39c-dce24ffdba41", 01:22:34.799 "is_configured": true, 01:22:34.799 "data_offset": 0, 01:22:34.799 "data_size": 65536 01:22:34.799 } 01:22:34.799 ] 01:22:34.799 }' 01:22:34.799 05:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:22:34.799 05:17:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:22:35.366 05:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 01:22:35.366 05:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 01:22:35.366 05:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 01:22:35.366 05:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 01:22:35.366 05:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 01:22:35.366 05:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 01:22:35.366 05:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 01:22:35.366 05:17:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:35.366 05:17:26 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@187 -- # jq '.[]' 01:22:35.366 05:17:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:22:35.366 [2024-12-09 05:17:26.848825] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 01:22:35.366 05:17:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:35.366 05:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 01:22:35.366 "name": "Existed_Raid", 01:22:35.366 "aliases": [ 01:22:35.366 "b76c0f98-6b6b-4249-9284-744e18f67233" 01:22:35.366 ], 01:22:35.366 "product_name": "Raid Volume", 01:22:35.366 "block_size": 512, 01:22:35.366 "num_blocks": 196608, 01:22:35.366 "uuid": "b76c0f98-6b6b-4249-9284-744e18f67233", 01:22:35.366 "assigned_rate_limits": { 01:22:35.366 "rw_ios_per_sec": 0, 01:22:35.366 "rw_mbytes_per_sec": 0, 01:22:35.366 "r_mbytes_per_sec": 0, 01:22:35.366 "w_mbytes_per_sec": 0 01:22:35.366 }, 01:22:35.366 "claimed": false, 01:22:35.366 "zoned": false, 01:22:35.366 "supported_io_types": { 01:22:35.366 "read": true, 01:22:35.366 "write": true, 01:22:35.366 "unmap": true, 01:22:35.366 "flush": true, 01:22:35.366 "reset": true, 01:22:35.366 "nvme_admin": false, 01:22:35.366 "nvme_io": false, 01:22:35.366 "nvme_io_md": false, 01:22:35.366 "write_zeroes": true, 01:22:35.366 "zcopy": false, 01:22:35.366 "get_zone_info": false, 01:22:35.366 "zone_management": false, 01:22:35.366 "zone_append": false, 01:22:35.366 "compare": false, 01:22:35.366 "compare_and_write": false, 01:22:35.366 "abort": false, 01:22:35.366 "seek_hole": false, 01:22:35.366 "seek_data": false, 01:22:35.366 "copy": false, 01:22:35.366 "nvme_iov_md": false 01:22:35.366 }, 01:22:35.366 "memory_domains": [ 01:22:35.366 { 01:22:35.366 "dma_device_id": "system", 01:22:35.366 "dma_device_type": 1 01:22:35.366 }, 01:22:35.366 { 01:22:35.366 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:22:35.366 "dma_device_type": 2 01:22:35.366 }, 01:22:35.366 
{ 01:22:35.366 "dma_device_id": "system", 01:22:35.366 "dma_device_type": 1 01:22:35.366 }, 01:22:35.366 { 01:22:35.366 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:22:35.366 "dma_device_type": 2 01:22:35.366 }, 01:22:35.366 { 01:22:35.366 "dma_device_id": "system", 01:22:35.366 "dma_device_type": 1 01:22:35.366 }, 01:22:35.366 { 01:22:35.366 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:22:35.366 "dma_device_type": 2 01:22:35.366 } 01:22:35.366 ], 01:22:35.366 "driver_specific": { 01:22:35.366 "raid": { 01:22:35.366 "uuid": "b76c0f98-6b6b-4249-9284-744e18f67233", 01:22:35.366 "strip_size_kb": 64, 01:22:35.366 "state": "online", 01:22:35.366 "raid_level": "raid0", 01:22:35.366 "superblock": false, 01:22:35.366 "num_base_bdevs": 3, 01:22:35.366 "num_base_bdevs_discovered": 3, 01:22:35.366 "num_base_bdevs_operational": 3, 01:22:35.366 "base_bdevs_list": [ 01:22:35.366 { 01:22:35.366 "name": "NewBaseBdev", 01:22:35.366 "uuid": "94931259-bfd3-4317-ab55-89248d75e5f3", 01:22:35.366 "is_configured": true, 01:22:35.366 "data_offset": 0, 01:22:35.366 "data_size": 65536 01:22:35.366 }, 01:22:35.367 { 01:22:35.367 "name": "BaseBdev2", 01:22:35.367 "uuid": "dc0536cc-d00e-43d5-a3c3-6c3f3d7d4331", 01:22:35.367 "is_configured": true, 01:22:35.367 "data_offset": 0, 01:22:35.367 "data_size": 65536 01:22:35.367 }, 01:22:35.367 { 01:22:35.367 "name": "BaseBdev3", 01:22:35.367 "uuid": "220f31ee-fdae-4e11-b39c-dce24ffdba41", 01:22:35.367 "is_configured": true, 01:22:35.367 "data_offset": 0, 01:22:35.367 "data_size": 65536 01:22:35.367 } 01:22:35.367 ] 01:22:35.367 } 01:22:35.367 } 01:22:35.367 }' 01:22:35.367 05:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 01:22:35.367 05:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 01:22:35.367 BaseBdev2 01:22:35.367 BaseBdev3' 01:22:35.367 05:17:26 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:22:35.628 05:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 01:22:35.628 05:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:22:35.628 05:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 01:22:35.628 05:17:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:35.628 05:17:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:22:35.628 05:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:22:35.628 05:17:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:35.628 05:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:22:35.628 05:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:22:35.628 05:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:22:35.628 05:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 01:22:35.628 05:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:22:35.628 05:17:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:35.628 05:17:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:22:35.628 05:17:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:35.628 05:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:22:35.628 
05:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:22:35.628 05:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:22:35.628 05:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:22:35.628 05:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 01:22:35.629 05:17:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:35.629 05:17:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:22:35.629 05:17:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:35.629 05:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:22:35.629 05:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:22:35.629 05:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 01:22:35.629 05:17:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:35.629 05:17:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:22:35.629 [2024-12-09 05:17:27.140488] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 01:22:35.629 [2024-12-09 05:17:27.140556] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 01:22:35.629 [2024-12-09 05:17:27.140707] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 01:22:35.629 [2024-12-09 05:17:27.140798] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 01:22:35.629 [2024-12-09 05:17:27.140822] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 01:22:35.629 05:17:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:35.629 05:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 63671 01:22:35.629 05:17:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 63671 ']' 01:22:35.629 05:17:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 63671 01:22:35.629 05:17:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 01:22:35.629 05:17:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:22:35.629 05:17:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63671 01:22:35.629 killing process with pid 63671 01:22:35.629 05:17:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:22:35.629 05:17:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:22:35.629 05:17:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63671' 01:22:35.629 05:17:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 63671 01:22:35.629 05:17:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 63671 01:22:35.629 [2024-12-09 05:17:27.181533] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 01:22:36.196 [2024-12-09 05:17:27.533963] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 01:22:37.568 05:17:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 01:22:37.568 01:22:37.568 real 0m12.770s 01:22:37.568 user 0m20.807s 01:22:37.568 sys 0m1.743s 01:22:37.568 05:17:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 01:22:37.568 
05:17:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:22:37.568 ************************************ 01:22:37.568 END TEST raid_state_function_test 01:22:37.568 ************************************ 01:22:37.568 05:17:29 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 01:22:37.568 05:17:29 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 01:22:37.568 05:17:29 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 01:22:37.568 05:17:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 01:22:37.568 ************************************ 01:22:37.568 START TEST raid_state_function_test_sb 01:22:37.568 ************************************ 01:22:37.568 05:17:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 true 01:22:37.568 05:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 01:22:37.568 05:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 01:22:37.569 05:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 01:22:37.569 05:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 01:22:37.569 05:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 01:22:37.569 05:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 01:22:37.569 05:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 01:22:37.569 05:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 01:22:37.569 05:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 01:22:37.569 05:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 
01:22:37.569 05:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 01:22:37.569 05:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 01:22:37.569 05:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 01:22:37.569 05:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 01:22:37.569 05:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 01:22:37.569 05:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 01:22:37.569 05:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 01:22:37.569 05:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 01:22:37.569 05:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 01:22:37.569 05:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 01:22:37.569 05:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 01:22:37.569 05:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 01:22:37.569 05:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 01:22:37.569 05:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 01:22:37.569 05:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 01:22:37.569 05:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 01:22:37.569 Process raid pid: 64315 01:22:37.569 05:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=64315 01:22:37.569 05:17:29 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 64315' 01:22:37.569 05:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 64315 01:22:37.569 05:17:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 64315 ']' 01:22:37.569 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:22:37.569 05:17:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:22:37.569 05:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 01:22:37.569 05:17:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 01:22:37.569 05:17:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:22:37.569 05:17:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 01:22:37.569 05:17:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:22:37.569 [2024-12-09 05:17:29.142330] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
01:22:37.569 [2024-12-09 05:17:29.142529] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:22:37.826 [2024-12-09 05:17:29.320568] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:22:38.084 [2024-12-09 05:17:29.481122] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:22:38.342 [2024-12-09 05:17:29.756163] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 01:22:38.342 [2024-12-09 05:17:29.756244] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 01:22:38.599 05:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:22:38.599 05:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 01:22:38.599 05:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 01:22:38.599 05:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:38.599 05:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:22:38.599 [2024-12-09 05:17:30.157220] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 01:22:38.599 [2024-12-09 05:17:30.157339] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 01:22:38.600 [2024-12-09 05:17:30.157371] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 01:22:38.600 [2024-12-09 05:17:30.157397] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 01:22:38.600 [2024-12-09 05:17:30.157408] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with 
name: BaseBdev3 01:22:38.600 [2024-12-09 05:17:30.157425] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 01:22:38.600 05:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:38.600 05:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 01:22:38.600 05:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:22:38.600 05:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:22:38.600 05:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 01:22:38.600 05:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:22:38.600 05:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 01:22:38.600 05:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:22:38.600 05:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:22:38.600 05:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:22:38.600 05:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 01:22:38.600 05:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:22:38.600 05:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:22:38.600 05:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:38.600 05:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:22:38.600 05:17:30 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:38.600 05:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:22:38.600 "name": "Existed_Raid", 01:22:38.600 "uuid": "4c75f267-1cdd-45e0-9e11-62adc0932595", 01:22:38.600 "strip_size_kb": 64, 01:22:38.600 "state": "configuring", 01:22:38.600 "raid_level": "raid0", 01:22:38.600 "superblock": true, 01:22:38.600 "num_base_bdevs": 3, 01:22:38.600 "num_base_bdevs_discovered": 0, 01:22:38.600 "num_base_bdevs_operational": 3, 01:22:38.600 "base_bdevs_list": [ 01:22:38.600 { 01:22:38.600 "name": "BaseBdev1", 01:22:38.600 "uuid": "00000000-0000-0000-0000-000000000000", 01:22:38.600 "is_configured": false, 01:22:38.600 "data_offset": 0, 01:22:38.600 "data_size": 0 01:22:38.600 }, 01:22:38.600 { 01:22:38.600 "name": "BaseBdev2", 01:22:38.600 "uuid": "00000000-0000-0000-0000-000000000000", 01:22:38.600 "is_configured": false, 01:22:38.600 "data_offset": 0, 01:22:38.600 "data_size": 0 01:22:38.600 }, 01:22:38.600 { 01:22:38.600 "name": "BaseBdev3", 01:22:38.600 "uuid": "00000000-0000-0000-0000-000000000000", 01:22:38.600 "is_configured": false, 01:22:38.600 "data_offset": 0, 01:22:38.600 "data_size": 0 01:22:38.600 } 01:22:38.600 ] 01:22:38.600 }' 01:22:38.600 05:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:22:38.600 05:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:22:39.235 05:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 01:22:39.235 05:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:39.235 05:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:22:39.235 [2024-12-09 05:17:30.621303] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 01:22:39.235 [2024-12-09 05:17:30.621415] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 01:22:39.235 05:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:39.235 05:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 01:22:39.235 05:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:39.235 05:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:22:39.235 [2024-12-09 05:17:30.629218] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 01:22:39.235 [2024-12-09 05:17:30.629288] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 01:22:39.235 [2024-12-09 05:17:30.629304] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 01:22:39.235 [2024-12-09 05:17:30.629323] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 01:22:39.235 [2024-12-09 05:17:30.629333] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 01:22:39.235 [2024-12-09 05:17:30.629348] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 01:22:39.235 05:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:39.235 05:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 01:22:39.235 05:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:39.235 05:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:22:39.235 [2024-12-09 05:17:30.685639] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 01:22:39.235 BaseBdev1 
01:22:39.235 05:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:39.235 05:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 01:22:39.235 05:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 01:22:39.235 05:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 01:22:39.235 05:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 01:22:39.235 05:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 01:22:39.235 05:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 01:22:39.235 05:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 01:22:39.235 05:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:39.235 05:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:22:39.235 05:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:39.235 05:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 01:22:39.235 05:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:39.235 05:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:22:39.235 [ 01:22:39.235 { 01:22:39.235 "name": "BaseBdev1", 01:22:39.235 "aliases": [ 01:22:39.235 "e1bbd5c5-6d1c-461a-9bc3-2230862f04c6" 01:22:39.235 ], 01:22:39.235 "product_name": "Malloc disk", 01:22:39.235 "block_size": 512, 01:22:39.235 "num_blocks": 65536, 01:22:39.235 "uuid": "e1bbd5c5-6d1c-461a-9bc3-2230862f04c6", 01:22:39.235 "assigned_rate_limits": { 01:22:39.235 
"rw_ios_per_sec": 0, 01:22:39.235 "rw_mbytes_per_sec": 0, 01:22:39.235 "r_mbytes_per_sec": 0, 01:22:39.235 "w_mbytes_per_sec": 0 01:22:39.235 }, 01:22:39.235 "claimed": true, 01:22:39.235 "claim_type": "exclusive_write", 01:22:39.235 "zoned": false, 01:22:39.235 "supported_io_types": { 01:22:39.235 "read": true, 01:22:39.235 "write": true, 01:22:39.235 "unmap": true, 01:22:39.235 "flush": true, 01:22:39.235 "reset": true, 01:22:39.235 "nvme_admin": false, 01:22:39.235 "nvme_io": false, 01:22:39.235 "nvme_io_md": false, 01:22:39.235 "write_zeroes": true, 01:22:39.235 "zcopy": true, 01:22:39.235 "get_zone_info": false, 01:22:39.235 "zone_management": false, 01:22:39.235 "zone_append": false, 01:22:39.235 "compare": false, 01:22:39.235 "compare_and_write": false, 01:22:39.235 "abort": true, 01:22:39.235 "seek_hole": false, 01:22:39.235 "seek_data": false, 01:22:39.235 "copy": true, 01:22:39.235 "nvme_iov_md": false 01:22:39.235 }, 01:22:39.235 "memory_domains": [ 01:22:39.235 { 01:22:39.235 "dma_device_id": "system", 01:22:39.236 "dma_device_type": 1 01:22:39.236 }, 01:22:39.236 { 01:22:39.236 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:22:39.236 "dma_device_type": 2 01:22:39.236 } 01:22:39.236 ], 01:22:39.236 "driver_specific": {} 01:22:39.236 } 01:22:39.236 ] 01:22:39.236 05:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:39.236 05:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 01:22:39.236 05:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 01:22:39.236 05:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:22:39.236 05:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:22:39.236 05:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 01:22:39.236 05:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:22:39.236 05:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 01:22:39.236 05:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:22:39.236 05:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:22:39.236 05:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:22:39.236 05:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 01:22:39.236 05:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:22:39.236 05:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:39.236 05:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:22:39.236 05:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:22:39.236 05:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:39.236 05:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:22:39.236 "name": "Existed_Raid", 01:22:39.236 "uuid": "5b0dec35-4e00-4bf1-aba1-cc8c46bc8fc4", 01:22:39.236 "strip_size_kb": 64, 01:22:39.236 "state": "configuring", 01:22:39.236 "raid_level": "raid0", 01:22:39.236 "superblock": true, 01:22:39.236 "num_base_bdevs": 3, 01:22:39.236 "num_base_bdevs_discovered": 1, 01:22:39.236 "num_base_bdevs_operational": 3, 01:22:39.236 "base_bdevs_list": [ 01:22:39.236 { 01:22:39.236 "name": "BaseBdev1", 01:22:39.236 "uuid": "e1bbd5c5-6d1c-461a-9bc3-2230862f04c6", 01:22:39.236 "is_configured": true, 01:22:39.236 "data_offset": 2048, 01:22:39.236 "data_size": 63488 
01:22:39.236 }, 01:22:39.236 { 01:22:39.236 "name": "BaseBdev2", 01:22:39.236 "uuid": "00000000-0000-0000-0000-000000000000", 01:22:39.236 "is_configured": false, 01:22:39.236 "data_offset": 0, 01:22:39.236 "data_size": 0 01:22:39.236 }, 01:22:39.236 { 01:22:39.236 "name": "BaseBdev3", 01:22:39.236 "uuid": "00000000-0000-0000-0000-000000000000", 01:22:39.236 "is_configured": false, 01:22:39.236 "data_offset": 0, 01:22:39.236 "data_size": 0 01:22:39.236 } 01:22:39.236 ] 01:22:39.236 }' 01:22:39.236 05:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:22:39.236 05:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:22:39.804 05:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 01:22:39.804 05:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:39.804 05:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:22:39.804 [2024-12-09 05:17:31.233915] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 01:22:39.804 [2024-12-09 05:17:31.234041] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 01:22:39.804 05:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:39.804 05:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 01:22:39.804 05:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:39.804 05:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:22:39.804 [2024-12-09 05:17:31.241972] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 01:22:39.804 [2024-12-09 
05:17:31.245031] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 01:22:39.804 [2024-12-09 05:17:31.245208] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 01:22:39.804 [2024-12-09 05:17:31.245333] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 01:22:39.804 [2024-12-09 05:17:31.245496] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 01:22:39.804 05:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:39.804 05:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 01:22:39.804 05:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 01:22:39.804 05:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 01:22:39.804 05:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:22:39.804 05:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:22:39.804 05:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 01:22:39.804 05:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:22:39.804 05:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 01:22:39.804 05:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:22:39.804 05:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:22:39.804 05:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:22:39.804 05:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 01:22:39.804 05:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:22:39.804 05:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:39.804 05:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:22:39.804 05:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:22:39.804 05:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:39.804 05:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:22:39.804 "name": "Existed_Raid", 01:22:39.804 "uuid": "47e4a5e1-aa73-4c17-b59b-84f82a55b00d", 01:22:39.804 "strip_size_kb": 64, 01:22:39.804 "state": "configuring", 01:22:39.804 "raid_level": "raid0", 01:22:39.804 "superblock": true, 01:22:39.804 "num_base_bdevs": 3, 01:22:39.804 "num_base_bdevs_discovered": 1, 01:22:39.804 "num_base_bdevs_operational": 3, 01:22:39.804 "base_bdevs_list": [ 01:22:39.804 { 01:22:39.804 "name": "BaseBdev1", 01:22:39.804 "uuid": "e1bbd5c5-6d1c-461a-9bc3-2230862f04c6", 01:22:39.804 "is_configured": true, 01:22:39.804 "data_offset": 2048, 01:22:39.804 "data_size": 63488 01:22:39.804 }, 01:22:39.804 { 01:22:39.804 "name": "BaseBdev2", 01:22:39.804 "uuid": "00000000-0000-0000-0000-000000000000", 01:22:39.804 "is_configured": false, 01:22:39.804 "data_offset": 0, 01:22:39.804 "data_size": 0 01:22:39.804 }, 01:22:39.804 { 01:22:39.804 "name": "BaseBdev3", 01:22:39.804 "uuid": "00000000-0000-0000-0000-000000000000", 01:22:39.804 "is_configured": false, 01:22:39.804 "data_offset": 0, 01:22:39.804 "data_size": 0 01:22:39.804 } 01:22:39.804 ] 01:22:39.804 }' 01:22:39.804 05:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:22:39.804 05:17:31 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 01:22:40.372 05:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 01:22:40.372 05:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:40.372 05:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:22:40.372 [2024-12-09 05:17:31.826516] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 01:22:40.372 BaseBdev2 01:22:40.372 05:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:40.372 05:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 01:22:40.372 05:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 01:22:40.372 05:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 01:22:40.372 05:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 01:22:40.372 05:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 01:22:40.372 05:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 01:22:40.372 05:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 01:22:40.372 05:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:40.372 05:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:22:40.372 05:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:40.372 05:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 01:22:40.372 05:17:31 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 01:22:40.372 05:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:22:40.372 [ 01:22:40.372 { 01:22:40.372 "name": "BaseBdev2", 01:22:40.372 "aliases": [ 01:22:40.372 "98224535-de78-4a30-a305-f6d58025c483" 01:22:40.372 ], 01:22:40.372 "product_name": "Malloc disk", 01:22:40.372 "block_size": 512, 01:22:40.372 "num_blocks": 65536, 01:22:40.372 "uuid": "98224535-de78-4a30-a305-f6d58025c483", 01:22:40.372 "assigned_rate_limits": { 01:22:40.372 "rw_ios_per_sec": 0, 01:22:40.372 "rw_mbytes_per_sec": 0, 01:22:40.372 "r_mbytes_per_sec": 0, 01:22:40.372 "w_mbytes_per_sec": 0 01:22:40.372 }, 01:22:40.372 "claimed": true, 01:22:40.372 "claim_type": "exclusive_write", 01:22:40.373 "zoned": false, 01:22:40.373 "supported_io_types": { 01:22:40.373 "read": true, 01:22:40.373 "write": true, 01:22:40.373 "unmap": true, 01:22:40.373 "flush": true, 01:22:40.373 "reset": true, 01:22:40.373 "nvme_admin": false, 01:22:40.373 "nvme_io": false, 01:22:40.373 "nvme_io_md": false, 01:22:40.373 "write_zeroes": true, 01:22:40.373 "zcopy": true, 01:22:40.373 "get_zone_info": false, 01:22:40.373 "zone_management": false, 01:22:40.373 "zone_append": false, 01:22:40.373 "compare": false, 01:22:40.373 "compare_and_write": false, 01:22:40.373 "abort": true, 01:22:40.373 "seek_hole": false, 01:22:40.373 "seek_data": false, 01:22:40.373 "copy": true, 01:22:40.373 "nvme_iov_md": false 01:22:40.373 }, 01:22:40.373 "memory_domains": [ 01:22:40.373 { 01:22:40.373 "dma_device_id": "system", 01:22:40.373 "dma_device_type": 1 01:22:40.373 }, 01:22:40.373 { 01:22:40.373 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:22:40.373 "dma_device_type": 2 01:22:40.373 } 01:22:40.373 ], 01:22:40.373 "driver_specific": {} 01:22:40.373 } 01:22:40.373 ] 01:22:40.373 05:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:40.373 05:17:31 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@911 -- # return 0 01:22:40.373 05:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 01:22:40.373 05:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 01:22:40.373 05:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 01:22:40.373 05:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:22:40.373 05:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:22:40.373 05:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 01:22:40.373 05:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:22:40.373 05:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 01:22:40.373 05:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:22:40.373 05:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:22:40.373 05:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:22:40.373 05:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 01:22:40.373 05:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:22:40.373 05:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:40.373 05:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:22:40.373 05:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:22:40.373 05:17:31 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:40.373 05:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:22:40.373 "name": "Existed_Raid", 01:22:40.373 "uuid": "47e4a5e1-aa73-4c17-b59b-84f82a55b00d", 01:22:40.373 "strip_size_kb": 64, 01:22:40.373 "state": "configuring", 01:22:40.373 "raid_level": "raid0", 01:22:40.373 "superblock": true, 01:22:40.373 "num_base_bdevs": 3, 01:22:40.373 "num_base_bdevs_discovered": 2, 01:22:40.373 "num_base_bdevs_operational": 3, 01:22:40.373 "base_bdevs_list": [ 01:22:40.373 { 01:22:40.373 "name": "BaseBdev1", 01:22:40.373 "uuid": "e1bbd5c5-6d1c-461a-9bc3-2230862f04c6", 01:22:40.373 "is_configured": true, 01:22:40.373 "data_offset": 2048, 01:22:40.373 "data_size": 63488 01:22:40.373 }, 01:22:40.373 { 01:22:40.373 "name": "BaseBdev2", 01:22:40.373 "uuid": "98224535-de78-4a30-a305-f6d58025c483", 01:22:40.373 "is_configured": true, 01:22:40.373 "data_offset": 2048, 01:22:40.373 "data_size": 63488 01:22:40.373 }, 01:22:40.373 { 01:22:40.373 "name": "BaseBdev3", 01:22:40.373 "uuid": "00000000-0000-0000-0000-000000000000", 01:22:40.373 "is_configured": false, 01:22:40.373 "data_offset": 0, 01:22:40.373 "data_size": 0 01:22:40.373 } 01:22:40.373 ] 01:22:40.373 }' 01:22:40.373 05:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:22:40.373 05:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:22:40.940 05:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 01:22:40.940 05:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:40.940 05:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:22:40.940 [2024-12-09 05:17:32.435256] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 01:22:40.940 [2024-12-09 05:17:32.435672] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 01:22:40.940 [2024-12-09 05:17:32.435705] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 01:22:40.940 [2024-12-09 05:17:32.436073] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 01:22:40.940 BaseBdev3 01:22:40.940 [2024-12-09 05:17:32.436298] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 01:22:40.940 [2024-12-09 05:17:32.436317] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 01:22:40.940 [2024-12-09 05:17:32.436540] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:22:40.940 05:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:40.940 05:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 01:22:40.940 05:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 01:22:40.940 05:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 01:22:40.940 05:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 01:22:40.940 05:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 01:22:40.940 05:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 01:22:40.940 05:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 01:22:40.940 05:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:40.940 05:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:22:40.940 05:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 01:22:40.940 05:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 01:22:40.940 05:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:40.940 05:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:22:40.940 [ 01:22:40.940 { 01:22:40.940 "name": "BaseBdev3", 01:22:40.940 "aliases": [ 01:22:40.940 "720c2053-a429-48d2-b6be-f8ae7debd3cb" 01:22:40.940 ], 01:22:40.940 "product_name": "Malloc disk", 01:22:40.940 "block_size": 512, 01:22:40.940 "num_blocks": 65536, 01:22:40.940 "uuid": "720c2053-a429-48d2-b6be-f8ae7debd3cb", 01:22:40.940 "assigned_rate_limits": { 01:22:40.940 "rw_ios_per_sec": 0, 01:22:40.940 "rw_mbytes_per_sec": 0, 01:22:40.940 "r_mbytes_per_sec": 0, 01:22:40.940 "w_mbytes_per_sec": 0 01:22:40.940 }, 01:22:40.940 "claimed": true, 01:22:40.940 "claim_type": "exclusive_write", 01:22:40.940 "zoned": false, 01:22:40.940 "supported_io_types": { 01:22:40.940 "read": true, 01:22:40.940 "write": true, 01:22:40.940 "unmap": true, 01:22:40.940 "flush": true, 01:22:40.940 "reset": true, 01:22:40.940 "nvme_admin": false, 01:22:40.940 "nvme_io": false, 01:22:40.941 "nvme_io_md": false, 01:22:40.941 "write_zeroes": true, 01:22:40.941 "zcopy": true, 01:22:40.941 "get_zone_info": false, 01:22:40.941 "zone_management": false, 01:22:40.941 "zone_append": false, 01:22:40.941 "compare": false, 01:22:40.941 "compare_and_write": false, 01:22:40.941 "abort": true, 01:22:40.941 "seek_hole": false, 01:22:40.941 "seek_data": false, 01:22:40.941 "copy": true, 01:22:40.941 "nvme_iov_md": false 01:22:40.941 }, 01:22:40.941 "memory_domains": [ 01:22:40.941 { 01:22:40.941 "dma_device_id": "system", 01:22:40.941 "dma_device_type": 1 01:22:40.941 }, 01:22:40.941 { 01:22:40.941 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:22:40.941 "dma_device_type": 2 01:22:40.941 } 01:22:40.941 ], 01:22:40.941 "driver_specific": 
{} 01:22:40.941 } 01:22:40.941 ] 01:22:40.941 05:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:40.941 05:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 01:22:40.941 05:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 01:22:40.941 05:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 01:22:40.941 05:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 01:22:40.941 05:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:22:40.941 05:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:22:40.941 05:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 01:22:40.941 05:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:22:40.941 05:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 01:22:40.941 05:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:22:40.941 05:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:22:40.941 05:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:22:40.941 05:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 01:22:40.941 05:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:22:40.941 05:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:22:40.941 05:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 01:22:40.941 05:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:22:40.941 05:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:40.941 05:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:22:40.941 "name": "Existed_Raid", 01:22:40.941 "uuid": "47e4a5e1-aa73-4c17-b59b-84f82a55b00d", 01:22:40.941 "strip_size_kb": 64, 01:22:40.941 "state": "online", 01:22:40.941 "raid_level": "raid0", 01:22:40.941 "superblock": true, 01:22:40.941 "num_base_bdevs": 3, 01:22:40.941 "num_base_bdevs_discovered": 3, 01:22:40.941 "num_base_bdevs_operational": 3, 01:22:40.941 "base_bdevs_list": [ 01:22:40.941 { 01:22:40.941 "name": "BaseBdev1", 01:22:40.941 "uuid": "e1bbd5c5-6d1c-461a-9bc3-2230862f04c6", 01:22:40.941 "is_configured": true, 01:22:40.941 "data_offset": 2048, 01:22:40.941 "data_size": 63488 01:22:40.941 }, 01:22:40.941 { 01:22:40.941 "name": "BaseBdev2", 01:22:40.941 "uuid": "98224535-de78-4a30-a305-f6d58025c483", 01:22:40.941 "is_configured": true, 01:22:40.941 "data_offset": 2048, 01:22:40.941 "data_size": 63488 01:22:40.941 }, 01:22:40.941 { 01:22:40.941 "name": "BaseBdev3", 01:22:40.941 "uuid": "720c2053-a429-48d2-b6be-f8ae7debd3cb", 01:22:40.941 "is_configured": true, 01:22:40.941 "data_offset": 2048, 01:22:40.941 "data_size": 63488 01:22:40.941 } 01:22:40.941 ] 01:22:40.941 }' 01:22:40.941 05:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:22:40.941 05:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:22:41.508 05:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 01:22:41.508 05:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 01:22:41.508 05:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # 
local raid_bdev_info 01:22:41.509 05:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 01:22:41.509 05:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 01:22:41.509 05:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 01:22:41.509 05:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 01:22:41.509 05:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 01:22:41.509 05:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:41.509 05:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:22:41.509 [2024-12-09 05:17:32.980626] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 01:22:41.509 05:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:41.509 05:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 01:22:41.509 "name": "Existed_Raid", 01:22:41.509 "aliases": [ 01:22:41.509 "47e4a5e1-aa73-4c17-b59b-84f82a55b00d" 01:22:41.509 ], 01:22:41.509 "product_name": "Raid Volume", 01:22:41.509 "block_size": 512, 01:22:41.509 "num_blocks": 190464, 01:22:41.509 "uuid": "47e4a5e1-aa73-4c17-b59b-84f82a55b00d", 01:22:41.509 "assigned_rate_limits": { 01:22:41.509 "rw_ios_per_sec": 0, 01:22:41.509 "rw_mbytes_per_sec": 0, 01:22:41.509 "r_mbytes_per_sec": 0, 01:22:41.509 "w_mbytes_per_sec": 0 01:22:41.509 }, 01:22:41.509 "claimed": false, 01:22:41.509 "zoned": false, 01:22:41.509 "supported_io_types": { 01:22:41.509 "read": true, 01:22:41.509 "write": true, 01:22:41.509 "unmap": true, 01:22:41.509 "flush": true, 01:22:41.509 "reset": true, 01:22:41.509 "nvme_admin": false, 01:22:41.509 "nvme_io": false, 01:22:41.509 "nvme_io_md": false, 01:22:41.509 
"write_zeroes": true, 01:22:41.509 "zcopy": false, 01:22:41.509 "get_zone_info": false, 01:22:41.509 "zone_management": false, 01:22:41.509 "zone_append": false, 01:22:41.509 "compare": false, 01:22:41.509 "compare_and_write": false, 01:22:41.509 "abort": false, 01:22:41.509 "seek_hole": false, 01:22:41.509 "seek_data": false, 01:22:41.509 "copy": false, 01:22:41.509 "nvme_iov_md": false 01:22:41.509 }, 01:22:41.509 "memory_domains": [ 01:22:41.509 { 01:22:41.509 "dma_device_id": "system", 01:22:41.509 "dma_device_type": 1 01:22:41.509 }, 01:22:41.509 { 01:22:41.509 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:22:41.509 "dma_device_type": 2 01:22:41.509 }, 01:22:41.509 { 01:22:41.509 "dma_device_id": "system", 01:22:41.509 "dma_device_type": 1 01:22:41.509 }, 01:22:41.509 { 01:22:41.509 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:22:41.509 "dma_device_type": 2 01:22:41.509 }, 01:22:41.509 { 01:22:41.509 "dma_device_id": "system", 01:22:41.509 "dma_device_type": 1 01:22:41.509 }, 01:22:41.509 { 01:22:41.509 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:22:41.509 "dma_device_type": 2 01:22:41.509 } 01:22:41.509 ], 01:22:41.509 "driver_specific": { 01:22:41.509 "raid": { 01:22:41.509 "uuid": "47e4a5e1-aa73-4c17-b59b-84f82a55b00d", 01:22:41.509 "strip_size_kb": 64, 01:22:41.509 "state": "online", 01:22:41.509 "raid_level": "raid0", 01:22:41.509 "superblock": true, 01:22:41.509 "num_base_bdevs": 3, 01:22:41.509 "num_base_bdevs_discovered": 3, 01:22:41.509 "num_base_bdevs_operational": 3, 01:22:41.509 "base_bdevs_list": [ 01:22:41.509 { 01:22:41.509 "name": "BaseBdev1", 01:22:41.509 "uuid": "e1bbd5c5-6d1c-461a-9bc3-2230862f04c6", 01:22:41.509 "is_configured": true, 01:22:41.509 "data_offset": 2048, 01:22:41.509 "data_size": 63488 01:22:41.509 }, 01:22:41.509 { 01:22:41.509 "name": "BaseBdev2", 01:22:41.509 "uuid": "98224535-de78-4a30-a305-f6d58025c483", 01:22:41.509 "is_configured": true, 01:22:41.509 "data_offset": 2048, 01:22:41.509 "data_size": 63488 01:22:41.509 }, 
01:22:41.509 { 01:22:41.509 "name": "BaseBdev3", 01:22:41.509 "uuid": "720c2053-a429-48d2-b6be-f8ae7debd3cb", 01:22:41.509 "is_configured": true, 01:22:41.509 "data_offset": 2048, 01:22:41.509 "data_size": 63488 01:22:41.509 } 01:22:41.509 ] 01:22:41.509 } 01:22:41.509 } 01:22:41.509 }' 01:22:41.509 05:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 01:22:41.509 05:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 01:22:41.509 BaseBdev2 01:22:41.509 BaseBdev3' 01:22:41.509 05:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:22:41.509 05:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 01:22:41.509 05:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:22:41.509 05:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 01:22:41.509 05:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:22:41.509 05:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:41.509 05:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:22:41.767 05:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:41.767 05:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:22:41.767 05:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:22:41.767 05:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:22:41.767 
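The jq pipelines in the trace above pull two things out of the `bdev_get_bdevs` RPC output: the names of the configured base bdevs, and a `block_size`/metadata tuple used to compare the raid bdev against each base bdev. A minimal Python sketch of that same selection logic follows; the JSON literal is abridged from the dump above, not a live RPC response:

```python
import json

# Abridged stand-in for the `bdev_get_bdevs -b Existed_Raid` output dumped above.
raid_bdev_info = json.loads("""
{
  "name": "Existed_Raid",
  "block_size": 512,
  "driver_specific": {
    "raid": {
      "base_bdevs_list": [
        {"name": "BaseBdev1", "is_configured": true},
        {"name": "BaseBdev2", "is_configured": true},
        {"name": "BaseBdev3", "is_configured": true}
      ]
    }
  }
}
""")

# Equivalent of: jq -r '.driver_specific.raid.base_bdevs_list[]
#                       | select(.is_configured == true).name'
base_bdev_names = [
    b["name"]
    for b in raid_bdev_info["driver_specific"]["raid"]["base_bdevs_list"]
    if b["is_configured"]
]

# Equivalent of: jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
# Absent keys render as empty strings, which is why the trace shows
# cmp_raid_bdev='512   ' with trailing blanks.
fields = ["block_size", "md_size", "md_interleave", "dif_type"]
cmp_raid_bdev = " ".join(str(raid_bdev_info.get(f, "")) for f in fields)

print(base_bdev_names)  # ['BaseBdev1', 'BaseBdev2', 'BaseBdev3']
```

The trailing spaces matter: the shell comparison at line 193 of `bdev_raid.sh` matches the literal pattern `\5\1\2\ \ \ `, so a base bdev with any metadata configured would produce a different tuple and fail the check.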
05:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 01:22:41.767 05:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:22:41.767 05:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:41.767 05:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:22:41.767 05:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:41.767 05:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:22:41.767 05:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:22:41.767 05:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:22:41.767 05:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 01:22:41.767 05:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:41.767 05:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:22:41.767 05:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:22:41.768 05:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:41.768 05:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:22:41.768 05:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:22:41.768 05:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 01:22:41.768 05:17:33 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 01:22:41.768 05:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:22:41.768 [2024-12-09 05:17:33.280311] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 01:22:41.768 [2024-12-09 05:17:33.280385] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 01:22:41.768 [2024-12-09 05:17:33.280480] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 01:22:42.027 05:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:42.027 05:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 01:22:42.027 05:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 01:22:42.027 05:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 01:22:42.027 05:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 01:22:42.027 05:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 01:22:42.027 05:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 01:22:42.027 05:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:22:42.027 05:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 01:22:42.027 05:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 01:22:42.027 05:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:22:42.027 05:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 01:22:42.027 05:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 01:22:42.027 05:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:22:42.027 05:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:22:42.027 05:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 01:22:42.027 05:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:22:42.027 05:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:22:42.027 05:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:42.027 05:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:22:42.027 05:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:42.027 05:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:22:42.027 "name": "Existed_Raid", 01:22:42.027 "uuid": "47e4a5e1-aa73-4c17-b59b-84f82a55b00d", 01:22:42.027 "strip_size_kb": 64, 01:22:42.027 "state": "offline", 01:22:42.027 "raid_level": "raid0", 01:22:42.027 "superblock": true, 01:22:42.027 "num_base_bdevs": 3, 01:22:42.027 "num_base_bdevs_discovered": 2, 01:22:42.027 "num_base_bdevs_operational": 2, 01:22:42.027 "base_bdevs_list": [ 01:22:42.027 { 01:22:42.027 "name": null, 01:22:42.027 "uuid": "00000000-0000-0000-0000-000000000000", 01:22:42.027 "is_configured": false, 01:22:42.027 "data_offset": 0, 01:22:42.027 "data_size": 63488 01:22:42.027 }, 01:22:42.027 { 01:22:42.027 "name": "BaseBdev2", 01:22:42.027 "uuid": "98224535-de78-4a30-a305-f6d58025c483", 01:22:42.027 "is_configured": true, 01:22:42.027 "data_offset": 2048, 01:22:42.027 "data_size": 63488 01:22:42.027 }, 01:22:42.027 { 01:22:42.027 "name": "BaseBdev3", 01:22:42.027 "uuid": "720c2053-a429-48d2-b6be-f8ae7debd3cb", 
01:22:42.027 "is_configured": true, 01:22:42.027 "data_offset": 2048, 01:22:42.027 "data_size": 63488 01:22:42.027 } 01:22:42.027 ] 01:22:42.027 }' 01:22:42.027 05:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:22:42.027 05:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:22:42.594 05:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 01:22:42.594 05:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 01:22:42.594 05:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 01:22:42.594 05:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 01:22:42.594 05:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:42.594 05:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:22:42.594 05:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:42.594 05:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 01:22:42.594 05:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 01:22:42.594 05:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 01:22:42.594 05:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:42.594 05:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:22:42.594 [2024-12-09 05:17:34.005053] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 01:22:42.594 05:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:42.594 05:17:34 bdev_raid.raid_state_function_test_sb 
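The dump above is what `verify_raid_bdev_state Existed_Raid offline raid0 64 2` checks after `bdev_malloc_delete BaseBdev1`: raid0 has no redundancy (`has_redundancy` returned 1), so losing one base bdev must take the array offline, zero out the removed slot, and drop the discovered/operational counts to 2. A small Python sketch of that verification, with the helper name borrowed from the shell function and the JSON abridged from the dump:

```python
import json

# Abridged raid_bdev_info as dumped above, after BaseBdev1 was removed.
raid_bdev_info = json.loads("""
{
  "name": "Existed_Raid",
  "strip_size_kb": 64,
  "state": "offline",
  "raid_level": "raid0",
  "num_base_bdevs": 3,
  "num_base_bdevs_discovered": 2,
  "num_base_bdevs_operational": 2,
  "base_bdevs_list": [
    {"name": null, "is_configured": false},
    {"name": "BaseBdev2", "is_configured": true},
    {"name": "BaseBdev3", "is_configured": true}
  ]
}
""")

def verify_raid_bdev_state(info, expected_state, raid_level, strip_size, num_operational):
    """Mirror the shell helper: compare the dumped fields to the expectations."""
    assert info["state"] == expected_state
    assert info["raid_level"] == raid_level
    assert info["strip_size_kb"] == strip_size
    assert info["num_base_bdevs_operational"] == num_operational
    # The discovered count must match the configured entries in the list.
    discovered = sum(1 for b in info["base_bdevs_list"] if b["is_configured"])
    assert info["num_base_bdevs_discovered"] == discovered
    return True

# raid0 is not redundant, so one missing base bdev means expected_state=offline.
ok = verify_raid_bdev_state(raid_bdev_info, "offline", "raid0", 64, 2)
```

For redundant levels (raid1, raid5f) the same test path sets `expected_state=degraded` instead, which is the branch `has_redundancy` selects at line 262.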
-- bdev/bdev_raid.sh@270 -- # (( i++ )) 01:22:42.594 05:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 01:22:42.594 05:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 01:22:42.595 05:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 01:22:42.595 05:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:42.595 05:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:22:42.595 05:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:42.595 05:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 01:22:42.595 05:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 01:22:42.595 05:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 01:22:42.595 05:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:42.595 05:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:22:42.595 [2024-12-09 05:17:34.174499] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 01:22:42.595 [2024-12-09 05:17:34.174611] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 01:22:42.853 05:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:42.853 05:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 01:22:42.853 05:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 01:22:42.853 05:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 01:22:42.853 05:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:42.853 05:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:22:42.853 05:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 01:22:42.853 05:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:42.853 05:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 01:22:42.853 05:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 01:22:42.853 05:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 01:22:42.853 05:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 01:22:42.853 05:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 01:22:42.853 05:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 01:22:42.853 05:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:42.853 05:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:22:42.853 BaseBdev2 01:22:42.853 05:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:42.853 05:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 01:22:42.853 05:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 01:22:42.853 05:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 01:22:42.853 05:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 01:22:42.853 05:17:34 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 01:22:42.853 05:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 01:22:42.853 05:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 01:22:42.853 05:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:42.853 05:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:22:42.853 05:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:42.853 05:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 01:22:42.853 05:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:42.853 05:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:22:42.853 [ 01:22:42.853 { 01:22:42.853 "name": "BaseBdev2", 01:22:42.853 "aliases": [ 01:22:42.853 "9a8f7ebe-c003-469c-8431-90a4705c1d54" 01:22:42.853 ], 01:22:42.853 "product_name": "Malloc disk", 01:22:42.853 "block_size": 512, 01:22:42.853 "num_blocks": 65536, 01:22:42.853 "uuid": "9a8f7ebe-c003-469c-8431-90a4705c1d54", 01:22:42.853 "assigned_rate_limits": { 01:22:42.853 "rw_ios_per_sec": 0, 01:22:42.853 "rw_mbytes_per_sec": 0, 01:22:42.853 "r_mbytes_per_sec": 0, 01:22:42.853 "w_mbytes_per_sec": 0 01:22:42.853 }, 01:22:42.853 "claimed": false, 01:22:42.853 "zoned": false, 01:22:42.853 "supported_io_types": { 01:22:42.853 "read": true, 01:22:42.853 "write": true, 01:22:42.853 "unmap": true, 01:22:42.853 "flush": true, 01:22:42.853 "reset": true, 01:22:42.853 "nvme_admin": false, 01:22:42.853 "nvme_io": false, 01:22:42.853 "nvme_io_md": false, 01:22:42.853 "write_zeroes": true, 01:22:42.853 "zcopy": true, 01:22:42.854 "get_zone_info": false, 01:22:42.854 
"zone_management": false, 01:22:42.854 "zone_append": false, 01:22:42.854 "compare": false, 01:22:42.854 "compare_and_write": false, 01:22:42.854 "abort": true, 01:22:42.854 "seek_hole": false, 01:22:42.854 "seek_data": false, 01:22:42.854 "copy": true, 01:22:42.854 "nvme_iov_md": false 01:22:42.854 }, 01:22:42.854 "memory_domains": [ 01:22:42.854 { 01:22:42.854 "dma_device_id": "system", 01:22:42.854 "dma_device_type": 1 01:22:42.854 }, 01:22:42.854 { 01:22:42.854 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:22:42.854 "dma_device_type": 2 01:22:42.854 } 01:22:42.854 ], 01:22:42.854 "driver_specific": {} 01:22:42.854 } 01:22:42.854 ] 01:22:42.854 05:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:42.854 05:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 01:22:42.854 05:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 01:22:42.854 05:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 01:22:42.854 05:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 01:22:42.854 05:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:42.854 05:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:22:43.113 BaseBdev3 01:22:43.113 05:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:43.113 05:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 01:22:43.113 05:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 01:22:43.113 05:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 01:22:43.113 05:17:34 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@905 -- # local i 01:22:43.113 05:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 01:22:43.113 05:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 01:22:43.113 05:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 01:22:43.113 05:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:43.113 05:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:22:43.113 05:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:43.113 05:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 01:22:43.113 05:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:43.113 05:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:22:43.113 [ 01:22:43.113 { 01:22:43.113 "name": "BaseBdev3", 01:22:43.113 "aliases": [ 01:22:43.113 "0e27479b-1b85-488d-9ce2-7f84ce6949d8" 01:22:43.113 ], 01:22:43.113 "product_name": "Malloc disk", 01:22:43.113 "block_size": 512, 01:22:43.113 "num_blocks": 65536, 01:22:43.113 "uuid": "0e27479b-1b85-488d-9ce2-7f84ce6949d8", 01:22:43.113 "assigned_rate_limits": { 01:22:43.113 "rw_ios_per_sec": 0, 01:22:43.113 "rw_mbytes_per_sec": 0, 01:22:43.113 "r_mbytes_per_sec": 0, 01:22:43.113 "w_mbytes_per_sec": 0 01:22:43.113 }, 01:22:43.113 "claimed": false, 01:22:43.113 "zoned": false, 01:22:43.113 "supported_io_types": { 01:22:43.113 "read": true, 01:22:43.113 "write": true, 01:22:43.113 "unmap": true, 01:22:43.113 "flush": true, 01:22:43.113 "reset": true, 01:22:43.113 "nvme_admin": false, 01:22:43.113 "nvme_io": false, 01:22:43.113 "nvme_io_md": false, 01:22:43.113 "write_zeroes": true, 01:22:43.113 
"zcopy": true, 01:22:43.113 "get_zone_info": false, 01:22:43.113 "zone_management": false, 01:22:43.113 "zone_append": false, 01:22:43.113 "compare": false, 01:22:43.113 "compare_and_write": false, 01:22:43.113 "abort": true, 01:22:43.113 "seek_hole": false, 01:22:43.113 "seek_data": false, 01:22:43.113 "copy": true, 01:22:43.113 "nvme_iov_md": false 01:22:43.113 }, 01:22:43.113 "memory_domains": [ 01:22:43.113 { 01:22:43.113 "dma_device_id": "system", 01:22:43.113 "dma_device_type": 1 01:22:43.113 }, 01:22:43.113 { 01:22:43.113 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:22:43.113 "dma_device_type": 2 01:22:43.113 } 01:22:43.113 ], 01:22:43.113 "driver_specific": {} 01:22:43.113 } 01:22:43.113 ] 01:22:43.113 05:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:43.113 05:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 01:22:43.113 05:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 01:22:43.113 05:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 01:22:43.113 05:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 01:22:43.113 05:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:43.113 05:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:22:43.113 [2024-12-09 05:17:34.507684] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 01:22:43.113 [2024-12-09 05:17:34.508064] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 01:22:43.113 [2024-12-09 05:17:34.508126] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 01:22:43.113 [2024-12-09 05:17:34.510890] 
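The `waitforbdev BaseBdev2` / `waitforbdev BaseBdev3` steps above boil down to polling `bdev_get_bdevs -b <name> -t 2000` until the newly created malloc bdev registers. A Python sketch of that poll-with-timeout pattern; `get_bdev` and the fake registry below are hypothetical stand-ins for the RPC call, not SPDK API:

```python
import time

def wait_for_bdev(get_bdev, name, timeout_ms=2000, poll_interval_s=0.0):
    """Poll get_bdev(name) until it returns a bdev or the timeout expires,
    roughly what the shell helper does via `rpc_cmd bdev_get_bdevs -b <name> -t 2000`."""
    deadline = time.monotonic() + timeout_ms / 1000.0
    while time.monotonic() < deadline:
        bdev = get_bdev(name)
        if bdev is not None:
            return bdev
        time.sleep(poll_interval_s)
    raise TimeoutError(f"bdev {name} did not appear within {timeout_ms} ms")

# Simulated registry: the bdev shows up on the third lookup.
calls = {"n": 0}
def fake_get_bdev(name):
    calls["n"] += 1
    return {"name": name, "block_size": 512} if calls["n"] >= 3 else None

bdev = wait_for_bdev(fake_get_bdev, "BaseBdev3")
```

In the real helper the `-t 2000` flag pushes the wait into the RPC server itself, so the shell loop usually succeeds on the first call; the client-side loop only matters when the RPC transport is not yet up.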
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 01:22:43.113 05:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:43.113 05:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 01:22:43.113 05:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:22:43.113 05:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:22:43.113 05:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 01:22:43.113 05:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:22:43.113 05:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 01:22:43.113 05:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:22:43.113 05:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:22:43.113 05:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:22:43.113 05:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 01:22:43.113 05:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:22:43.113 05:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:43.113 05:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:22:43.113 05:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:22:43.113 05:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:43.113 05:17:34 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:22:43.114 "name": "Existed_Raid", 01:22:43.114 "uuid": "d8bbe54c-2749-4675-aaf6-d86be8b04f37", 01:22:43.114 "strip_size_kb": 64, 01:22:43.114 "state": "configuring", 01:22:43.114 "raid_level": "raid0", 01:22:43.114 "superblock": true, 01:22:43.114 "num_base_bdevs": 3, 01:22:43.114 "num_base_bdevs_discovered": 2, 01:22:43.114 "num_base_bdevs_operational": 3, 01:22:43.114 "base_bdevs_list": [ 01:22:43.114 { 01:22:43.114 "name": "BaseBdev1", 01:22:43.114 "uuid": "00000000-0000-0000-0000-000000000000", 01:22:43.114 "is_configured": false, 01:22:43.114 "data_offset": 0, 01:22:43.114 "data_size": 0 01:22:43.114 }, 01:22:43.114 { 01:22:43.114 "name": "BaseBdev2", 01:22:43.114 "uuid": "9a8f7ebe-c003-469c-8431-90a4705c1d54", 01:22:43.114 "is_configured": true, 01:22:43.114 "data_offset": 2048, 01:22:43.114 "data_size": 63488 01:22:43.114 }, 01:22:43.114 { 01:22:43.114 "name": "BaseBdev3", 01:22:43.114 "uuid": "0e27479b-1b85-488d-9ce2-7f84ce6949d8", 01:22:43.114 "is_configured": true, 01:22:43.114 "data_offset": 2048, 01:22:43.114 "data_size": 63488 01:22:43.114 } 01:22:43.114 ] 01:22:43.114 }' 01:22:43.114 05:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:22:43.114 05:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:22:43.680 05:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 01:22:43.680 05:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:43.680 05:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:22:43.680 [2024-12-09 05:17:35.084348] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 01:22:43.680 05:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:43.680 05:17:35 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 01:22:43.680 05:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:22:43.680 05:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:22:43.680 05:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 01:22:43.680 05:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:22:43.680 05:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 01:22:43.680 05:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:22:43.680 05:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:22:43.680 05:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:22:43.680 05:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 01:22:43.680 05:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:22:43.680 05:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:22:43.680 05:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:43.680 05:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:22:43.680 05:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:43.680 05:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:22:43.680 "name": "Existed_Raid", 01:22:43.680 "uuid": "d8bbe54c-2749-4675-aaf6-d86be8b04f37", 01:22:43.680 "strip_size_kb": 64, 
01:22:43.680 "state": "configuring", 01:22:43.680 "raid_level": "raid0", 01:22:43.680 "superblock": true, 01:22:43.680 "num_base_bdevs": 3, 01:22:43.680 "num_base_bdevs_discovered": 1, 01:22:43.680 "num_base_bdevs_operational": 3, 01:22:43.680 "base_bdevs_list": [ 01:22:43.680 { 01:22:43.680 "name": "BaseBdev1", 01:22:43.680 "uuid": "00000000-0000-0000-0000-000000000000", 01:22:43.680 "is_configured": false, 01:22:43.680 "data_offset": 0, 01:22:43.680 "data_size": 0 01:22:43.680 }, 01:22:43.680 { 01:22:43.680 "name": null, 01:22:43.680 "uuid": "9a8f7ebe-c003-469c-8431-90a4705c1d54", 01:22:43.680 "is_configured": false, 01:22:43.680 "data_offset": 0, 01:22:43.680 "data_size": 63488 01:22:43.680 }, 01:22:43.680 { 01:22:43.680 "name": "BaseBdev3", 01:22:43.680 "uuid": "0e27479b-1b85-488d-9ce2-7f84ce6949d8", 01:22:43.680 "is_configured": true, 01:22:43.680 "data_offset": 2048, 01:22:43.680 "data_size": 63488 01:22:43.680 } 01:22:43.680 ] 01:22:43.680 }' 01:22:43.680 05:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:22:43.680 05:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:22:44.251 05:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 01:22:44.251 05:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 01:22:44.251 05:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:44.251 05:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:22:44.251 05:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:44.251 05:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 01:22:44.251 05:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1 01:22:44.251 05:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:44.251 05:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:22:44.251 [2024-12-09 05:17:35.705332] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 01:22:44.251 BaseBdev1 01:22:44.251 05:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:44.251 05:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 01:22:44.251 05:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 01:22:44.251 05:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 01:22:44.251 05:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 01:22:44.251 05:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 01:22:44.251 05:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 01:22:44.251 05:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 01:22:44.251 05:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:44.251 05:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:22:44.251 05:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:44.251 05:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 01:22:44.251 05:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:44.251 05:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:22:44.251 
[ 01:22:44.251 { 01:22:44.251 "name": "BaseBdev1", 01:22:44.251 "aliases": [ 01:22:44.251 "5a5f1d9a-35a4-445e-89ee-75cbbe15e77a" 01:22:44.251 ], 01:22:44.251 "product_name": "Malloc disk", 01:22:44.251 "block_size": 512, 01:22:44.251 "num_blocks": 65536, 01:22:44.251 "uuid": "5a5f1d9a-35a4-445e-89ee-75cbbe15e77a", 01:22:44.251 "assigned_rate_limits": { 01:22:44.251 "rw_ios_per_sec": 0, 01:22:44.251 "rw_mbytes_per_sec": 0, 01:22:44.251 "r_mbytes_per_sec": 0, 01:22:44.251 "w_mbytes_per_sec": 0 01:22:44.251 }, 01:22:44.251 "claimed": true, 01:22:44.251 "claim_type": "exclusive_write", 01:22:44.251 "zoned": false, 01:22:44.251 "supported_io_types": { 01:22:44.251 "read": true, 01:22:44.251 "write": true, 01:22:44.251 "unmap": true, 01:22:44.251 "flush": true, 01:22:44.251 "reset": true, 01:22:44.251 "nvme_admin": false, 01:22:44.251 "nvme_io": false, 01:22:44.251 "nvme_io_md": false, 01:22:44.251 "write_zeroes": true, 01:22:44.251 "zcopy": true, 01:22:44.251 "get_zone_info": false, 01:22:44.251 "zone_management": false, 01:22:44.251 "zone_append": false, 01:22:44.251 "compare": false, 01:22:44.251 "compare_and_write": false, 01:22:44.251 "abort": true, 01:22:44.251 "seek_hole": false, 01:22:44.251 "seek_data": false, 01:22:44.251 "copy": true, 01:22:44.251 "nvme_iov_md": false 01:22:44.251 }, 01:22:44.251 "memory_domains": [ 01:22:44.251 { 01:22:44.251 "dma_device_id": "system", 01:22:44.251 "dma_device_type": 1 01:22:44.251 }, 01:22:44.251 { 01:22:44.251 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:22:44.251 "dma_device_type": 2 01:22:44.251 } 01:22:44.251 ], 01:22:44.251 "driver_specific": {} 01:22:44.251 } 01:22:44.251 ] 01:22:44.251 05:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:44.251 05:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 01:22:44.251 05:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid 
configuring raid0 64 3 01:22:44.252 05:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:22:44.252 05:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:22:44.252 05:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 01:22:44.252 05:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:22:44.252 05:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 01:22:44.252 05:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:22:44.252 05:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:22:44.252 05:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:22:44.252 05:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 01:22:44.252 05:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:22:44.252 05:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:22:44.252 05:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:44.252 05:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:22:44.252 05:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:44.252 05:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:22:44.252 "name": "Existed_Raid", 01:22:44.252 "uuid": "d8bbe54c-2749-4675-aaf6-d86be8b04f37", 01:22:44.252 "strip_size_kb": 64, 01:22:44.252 "state": "configuring", 01:22:44.252 "raid_level": "raid0", 01:22:44.252 "superblock": true, 
01:22:44.252 "num_base_bdevs": 3, 01:22:44.252 "num_base_bdevs_discovered": 2, 01:22:44.252 "num_base_bdevs_operational": 3, 01:22:44.252 "base_bdevs_list": [ 01:22:44.252 { 01:22:44.252 "name": "BaseBdev1", 01:22:44.252 "uuid": "5a5f1d9a-35a4-445e-89ee-75cbbe15e77a", 01:22:44.252 "is_configured": true, 01:22:44.252 "data_offset": 2048, 01:22:44.252 "data_size": 63488 01:22:44.252 }, 01:22:44.252 { 01:22:44.252 "name": null, 01:22:44.252 "uuid": "9a8f7ebe-c003-469c-8431-90a4705c1d54", 01:22:44.252 "is_configured": false, 01:22:44.252 "data_offset": 0, 01:22:44.252 "data_size": 63488 01:22:44.252 }, 01:22:44.252 { 01:22:44.252 "name": "BaseBdev3", 01:22:44.252 "uuid": "0e27479b-1b85-488d-9ce2-7f84ce6949d8", 01:22:44.252 "is_configured": true, 01:22:44.252 "data_offset": 2048, 01:22:44.252 "data_size": 63488 01:22:44.252 } 01:22:44.252 ] 01:22:44.252 }' 01:22:44.252 05:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:22:44.252 05:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:22:44.818 05:17:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 01:22:44.818 05:17:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:44.818 05:17:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:22:44.818 05:17:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 01:22:44.818 05:17:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:44.818 05:17:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 01:22:44.818 05:17:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 01:22:44.818 05:17:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 01:22:44.818 05:17:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:22:44.818 [2024-12-09 05:17:36.345597] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 01:22:44.818 05:17:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:44.818 05:17:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 01:22:44.818 05:17:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:22:44.818 05:17:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:22:44.818 05:17:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 01:22:44.818 05:17:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:22:44.818 05:17:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 01:22:44.818 05:17:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:22:44.818 05:17:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:22:44.818 05:17:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:22:44.818 05:17:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 01:22:44.818 05:17:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:22:44.818 05:17:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:44.818 05:17:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:22:44.818 05:17:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 01:22:44.818 05:17:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:44.818 05:17:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:22:44.818 "name": "Existed_Raid", 01:22:44.818 "uuid": "d8bbe54c-2749-4675-aaf6-d86be8b04f37", 01:22:44.818 "strip_size_kb": 64, 01:22:44.818 "state": "configuring", 01:22:44.818 "raid_level": "raid0", 01:22:44.818 "superblock": true, 01:22:44.818 "num_base_bdevs": 3, 01:22:44.818 "num_base_bdevs_discovered": 1, 01:22:44.818 "num_base_bdevs_operational": 3, 01:22:44.818 "base_bdevs_list": [ 01:22:44.818 { 01:22:44.818 "name": "BaseBdev1", 01:22:44.818 "uuid": "5a5f1d9a-35a4-445e-89ee-75cbbe15e77a", 01:22:44.818 "is_configured": true, 01:22:44.818 "data_offset": 2048, 01:22:44.818 "data_size": 63488 01:22:44.818 }, 01:22:44.818 { 01:22:44.818 "name": null, 01:22:44.818 "uuid": "9a8f7ebe-c003-469c-8431-90a4705c1d54", 01:22:44.818 "is_configured": false, 01:22:44.818 "data_offset": 0, 01:22:44.818 "data_size": 63488 01:22:44.818 }, 01:22:44.818 { 01:22:44.818 "name": null, 01:22:44.818 "uuid": "0e27479b-1b85-488d-9ce2-7f84ce6949d8", 01:22:44.818 "is_configured": false, 01:22:44.818 "data_offset": 0, 01:22:44.818 "data_size": 63488 01:22:44.818 } 01:22:44.818 ] 01:22:44.818 }' 01:22:44.818 05:17:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:22:44.818 05:17:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:22:45.384 05:17:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 01:22:45.384 05:17:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:45.384 05:17:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:22:45.384 05:17:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq 
'.[0].base_bdevs_list[2].is_configured' 01:22:45.384 05:17:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:45.384 05:17:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 01:22:45.384 05:17:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 01:22:45.384 05:17:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:45.384 05:17:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:22:45.384 [2024-12-09 05:17:36.894180] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 01:22:45.384 05:17:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:45.384 05:17:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 01:22:45.384 05:17:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:22:45.384 05:17:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:22:45.384 05:17:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 01:22:45.384 05:17:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:22:45.384 05:17:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 01:22:45.384 05:17:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:22:45.384 05:17:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:22:45.384 05:17:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:22:45.384 05:17:36 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 01:22:45.384 05:17:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:22:45.384 05:17:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:22:45.384 05:17:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:45.384 05:17:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:22:45.384 05:17:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:45.384 05:17:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:22:45.384 "name": "Existed_Raid", 01:22:45.384 "uuid": "d8bbe54c-2749-4675-aaf6-d86be8b04f37", 01:22:45.384 "strip_size_kb": 64, 01:22:45.384 "state": "configuring", 01:22:45.384 "raid_level": "raid0", 01:22:45.384 "superblock": true, 01:22:45.384 "num_base_bdevs": 3, 01:22:45.384 "num_base_bdevs_discovered": 2, 01:22:45.384 "num_base_bdevs_operational": 3, 01:22:45.384 "base_bdevs_list": [ 01:22:45.384 { 01:22:45.384 "name": "BaseBdev1", 01:22:45.384 "uuid": "5a5f1d9a-35a4-445e-89ee-75cbbe15e77a", 01:22:45.384 "is_configured": true, 01:22:45.384 "data_offset": 2048, 01:22:45.384 "data_size": 63488 01:22:45.384 }, 01:22:45.384 { 01:22:45.384 "name": null, 01:22:45.384 "uuid": "9a8f7ebe-c003-469c-8431-90a4705c1d54", 01:22:45.384 "is_configured": false, 01:22:45.384 "data_offset": 0, 01:22:45.384 "data_size": 63488 01:22:45.384 }, 01:22:45.384 { 01:22:45.384 "name": "BaseBdev3", 01:22:45.384 "uuid": "0e27479b-1b85-488d-9ce2-7f84ce6949d8", 01:22:45.384 "is_configured": true, 01:22:45.384 "data_offset": 2048, 01:22:45.384 "data_size": 63488 01:22:45.384 } 01:22:45.384 ] 01:22:45.384 }' 01:22:45.384 05:17:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:22:45.384 
05:17:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:22:45.950 05:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 01:22:45.950 05:17:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:45.951 05:17:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:22:45.951 05:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 01:22:45.951 05:17:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:45.951 05:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 01:22:45.951 05:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 01:22:45.951 05:17:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:45.951 05:17:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:22:45.951 [2024-12-09 05:17:37.526113] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 01:22:46.209 05:17:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:46.209 05:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 01:22:46.209 05:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:22:46.209 05:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:22:46.209 05:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 01:22:46.209 05:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:22:46.209 05:17:37 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 01:22:46.209 05:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:22:46.209 05:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:22:46.209 05:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:22:46.209 05:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 01:22:46.209 05:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:22:46.209 05:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:22:46.209 05:17:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:46.209 05:17:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:22:46.209 05:17:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:46.209 05:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:22:46.209 "name": "Existed_Raid", 01:22:46.209 "uuid": "d8bbe54c-2749-4675-aaf6-d86be8b04f37", 01:22:46.209 "strip_size_kb": 64, 01:22:46.209 "state": "configuring", 01:22:46.209 "raid_level": "raid0", 01:22:46.209 "superblock": true, 01:22:46.209 "num_base_bdevs": 3, 01:22:46.209 "num_base_bdevs_discovered": 1, 01:22:46.209 "num_base_bdevs_operational": 3, 01:22:46.209 "base_bdevs_list": [ 01:22:46.209 { 01:22:46.209 "name": null, 01:22:46.209 "uuid": "5a5f1d9a-35a4-445e-89ee-75cbbe15e77a", 01:22:46.209 "is_configured": false, 01:22:46.209 "data_offset": 0, 01:22:46.209 "data_size": 63488 01:22:46.209 }, 01:22:46.209 { 01:22:46.209 "name": null, 01:22:46.209 "uuid": "9a8f7ebe-c003-469c-8431-90a4705c1d54", 01:22:46.209 "is_configured": false, 
01:22:46.209 "data_offset": 0, 01:22:46.209 "data_size": 63488 01:22:46.209 }, 01:22:46.209 { 01:22:46.209 "name": "BaseBdev3", 01:22:46.209 "uuid": "0e27479b-1b85-488d-9ce2-7f84ce6949d8", 01:22:46.209 "is_configured": true, 01:22:46.209 "data_offset": 2048, 01:22:46.209 "data_size": 63488 01:22:46.209 } 01:22:46.209 ] 01:22:46.209 }' 01:22:46.209 05:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:22:46.209 05:17:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:22:46.776 05:17:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 01:22:46.776 05:17:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 01:22:46.776 05:17:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:46.776 05:17:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:22:46.776 05:17:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:46.776 05:17:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 01:22:46.776 05:17:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 01:22:46.776 05:17:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:46.776 05:17:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:22:46.776 [2024-12-09 05:17:38.258495] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 01:22:46.776 05:17:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:46.776 05:17:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 
01:22:46.776 05:17:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:22:46.776 05:17:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:22:46.776 05:17:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 01:22:46.776 05:17:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:22:46.776 05:17:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 01:22:46.776 05:17:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:22:46.776 05:17:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:22:46.776 05:17:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:22:46.776 05:17:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 01:22:46.776 05:17:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:22:46.776 05:17:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:22:46.776 05:17:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:46.776 05:17:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:22:46.776 05:17:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:46.776 05:17:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:22:46.776 "name": "Existed_Raid", 01:22:46.776 "uuid": "d8bbe54c-2749-4675-aaf6-d86be8b04f37", 01:22:46.776 "strip_size_kb": 64, 01:22:46.776 "state": "configuring", 01:22:46.776 "raid_level": "raid0", 01:22:46.776 "superblock": true, 01:22:46.776 
"num_base_bdevs": 3, 01:22:46.776 "num_base_bdevs_discovered": 2, 01:22:46.776 "num_base_bdevs_operational": 3, 01:22:46.776 "base_bdevs_list": [ 01:22:46.776 { 01:22:46.776 "name": null, 01:22:46.776 "uuid": "5a5f1d9a-35a4-445e-89ee-75cbbe15e77a", 01:22:46.776 "is_configured": false, 01:22:46.776 "data_offset": 0, 01:22:46.776 "data_size": 63488 01:22:46.776 }, 01:22:46.776 { 01:22:46.776 "name": "BaseBdev2", 01:22:46.776 "uuid": "9a8f7ebe-c003-469c-8431-90a4705c1d54", 01:22:46.776 "is_configured": true, 01:22:46.776 "data_offset": 2048, 01:22:46.776 "data_size": 63488 01:22:46.776 }, 01:22:46.776 { 01:22:46.776 "name": "BaseBdev3", 01:22:46.776 "uuid": "0e27479b-1b85-488d-9ce2-7f84ce6949d8", 01:22:46.776 "is_configured": true, 01:22:46.776 "data_offset": 2048, 01:22:46.776 "data_size": 63488 01:22:46.776 } 01:22:46.776 ] 01:22:46.776 }' 01:22:46.776 05:17:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:22:46.776 05:17:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:22:47.342 05:17:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 01:22:47.342 05:17:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 01:22:47.342 05:17:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:47.342 05:17:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:22:47.342 05:17:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:47.342 05:17:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 01:22:47.342 05:17:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 01:22:47.342 05:17:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
01:22:47.343 05:17:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:22:47.343 05:17:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 01:22:47.343 05:17:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:47.343 05:17:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 5a5f1d9a-35a4-445e-89ee-75cbbe15e77a 01:22:47.343 05:17:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:47.343 05:17:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:22:47.601 [2024-12-09 05:17:38.987072] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 01:22:47.601 [2024-12-09 05:17:38.987492] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 01:22:47.601 [2024-12-09 05:17:38.987523] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 01:22:47.601 NewBaseBdev 01:22:47.601 [2024-12-09 05:17:38.987915] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 01:22:47.601 [2024-12-09 05:17:38.988137] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 01:22:47.601 [2024-12-09 05:17:38.988154] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 01:22:47.601 [2024-12-09 05:17:38.988343] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:22:47.601 05:17:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:47.601 05:17:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 01:22:47.601 05:17:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local 
bdev_name=NewBaseBdev 01:22:47.601 05:17:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 01:22:47.601 05:17:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 01:22:47.601 05:17:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 01:22:47.601 05:17:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 01:22:47.601 05:17:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 01:22:47.601 05:17:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:47.601 05:17:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:22:47.601 05:17:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:47.601 05:17:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 01:22:47.601 05:17:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:47.601 05:17:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:22:47.601 [ 01:22:47.601 { 01:22:47.601 "name": "NewBaseBdev", 01:22:47.601 "aliases": [ 01:22:47.601 "5a5f1d9a-35a4-445e-89ee-75cbbe15e77a" 01:22:47.601 ], 01:22:47.601 "product_name": "Malloc disk", 01:22:47.601 "block_size": 512, 01:22:47.601 "num_blocks": 65536, 01:22:47.601 "uuid": "5a5f1d9a-35a4-445e-89ee-75cbbe15e77a", 01:22:47.601 "assigned_rate_limits": { 01:22:47.601 "rw_ios_per_sec": 0, 01:22:47.601 "rw_mbytes_per_sec": 0, 01:22:47.601 "r_mbytes_per_sec": 0, 01:22:47.601 "w_mbytes_per_sec": 0 01:22:47.601 }, 01:22:47.601 "claimed": true, 01:22:47.601 "claim_type": "exclusive_write", 01:22:47.601 "zoned": false, 01:22:47.601 "supported_io_types": { 01:22:47.601 "read": true, 01:22:47.601 
"write": true, 01:22:47.601 "unmap": true, 01:22:47.601 "flush": true, 01:22:47.601 "reset": true, 01:22:47.601 "nvme_admin": false, 01:22:47.601 "nvme_io": false, 01:22:47.601 "nvme_io_md": false, 01:22:47.601 "write_zeroes": true, 01:22:47.601 "zcopy": true, 01:22:47.601 "get_zone_info": false, 01:22:47.601 "zone_management": false, 01:22:47.601 "zone_append": false, 01:22:47.601 "compare": false, 01:22:47.601 "compare_and_write": false, 01:22:47.601 "abort": true, 01:22:47.601 "seek_hole": false, 01:22:47.601 "seek_data": false, 01:22:47.601 "copy": true, 01:22:47.601 "nvme_iov_md": false 01:22:47.601 }, 01:22:47.601 "memory_domains": [ 01:22:47.601 { 01:22:47.601 "dma_device_id": "system", 01:22:47.601 "dma_device_type": 1 01:22:47.601 }, 01:22:47.601 { 01:22:47.601 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:22:47.601 "dma_device_type": 2 01:22:47.601 } 01:22:47.601 ], 01:22:47.601 "driver_specific": {} 01:22:47.601 } 01:22:47.601 ] 01:22:47.601 05:17:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:47.601 05:17:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 01:22:47.601 05:17:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 01:22:47.601 05:17:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:22:47.601 05:17:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:22:47.601 05:17:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 01:22:47.601 05:17:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:22:47.601 05:17:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 01:22:47.601 05:17:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # 
local raid_bdev_info 01:22:47.601 05:17:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:22:47.601 05:17:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:22:47.601 05:17:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 01:22:47.601 05:17:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:22:47.601 05:17:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:22:47.601 05:17:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:47.601 05:17:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:22:47.601 05:17:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:47.601 05:17:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:22:47.601 "name": "Existed_Raid", 01:22:47.601 "uuid": "d8bbe54c-2749-4675-aaf6-d86be8b04f37", 01:22:47.601 "strip_size_kb": 64, 01:22:47.601 "state": "online", 01:22:47.601 "raid_level": "raid0", 01:22:47.601 "superblock": true, 01:22:47.601 "num_base_bdevs": 3, 01:22:47.601 "num_base_bdevs_discovered": 3, 01:22:47.601 "num_base_bdevs_operational": 3, 01:22:47.601 "base_bdevs_list": [ 01:22:47.601 { 01:22:47.601 "name": "NewBaseBdev", 01:22:47.601 "uuid": "5a5f1d9a-35a4-445e-89ee-75cbbe15e77a", 01:22:47.601 "is_configured": true, 01:22:47.601 "data_offset": 2048, 01:22:47.601 "data_size": 63488 01:22:47.601 }, 01:22:47.601 { 01:22:47.601 "name": "BaseBdev2", 01:22:47.601 "uuid": "9a8f7ebe-c003-469c-8431-90a4705c1d54", 01:22:47.601 "is_configured": true, 01:22:47.601 "data_offset": 2048, 01:22:47.601 "data_size": 63488 01:22:47.601 }, 01:22:47.601 { 01:22:47.601 "name": "BaseBdev3", 01:22:47.601 "uuid": 
"0e27479b-1b85-488d-9ce2-7f84ce6949d8", 01:22:47.601 "is_configured": true, 01:22:47.601 "data_offset": 2048, 01:22:47.601 "data_size": 63488 01:22:47.601 } 01:22:47.601 ] 01:22:47.601 }' 01:22:47.601 05:17:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:22:47.601 05:17:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:22:48.167 05:17:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 01:22:48.167 05:17:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 01:22:48.167 05:17:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 01:22:48.167 05:17:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 01:22:48.167 05:17:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 01:22:48.167 05:17:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 01:22:48.167 05:17:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 01:22:48.167 05:17:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 01:22:48.167 05:17:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:48.167 05:17:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:22:48.167 [2024-12-09 05:17:39.607753] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 01:22:48.167 05:17:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:48.167 05:17:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 01:22:48.167 "name": "Existed_Raid", 01:22:48.167 "aliases": [ 01:22:48.167 "d8bbe54c-2749-4675-aaf6-d86be8b04f37" 
01:22:48.167 ], 01:22:48.167 "product_name": "Raid Volume", 01:22:48.167 "block_size": 512, 01:22:48.167 "num_blocks": 190464, 01:22:48.167 "uuid": "d8bbe54c-2749-4675-aaf6-d86be8b04f37", 01:22:48.167 "assigned_rate_limits": { 01:22:48.167 "rw_ios_per_sec": 0, 01:22:48.168 "rw_mbytes_per_sec": 0, 01:22:48.168 "r_mbytes_per_sec": 0, 01:22:48.168 "w_mbytes_per_sec": 0 01:22:48.168 }, 01:22:48.168 "claimed": false, 01:22:48.168 "zoned": false, 01:22:48.168 "supported_io_types": { 01:22:48.168 "read": true, 01:22:48.168 "write": true, 01:22:48.168 "unmap": true, 01:22:48.168 "flush": true, 01:22:48.168 "reset": true, 01:22:48.168 "nvme_admin": false, 01:22:48.168 "nvme_io": false, 01:22:48.168 "nvme_io_md": false, 01:22:48.168 "write_zeroes": true, 01:22:48.168 "zcopy": false, 01:22:48.168 "get_zone_info": false, 01:22:48.168 "zone_management": false, 01:22:48.168 "zone_append": false, 01:22:48.168 "compare": false, 01:22:48.168 "compare_and_write": false, 01:22:48.168 "abort": false, 01:22:48.168 "seek_hole": false, 01:22:48.168 "seek_data": false, 01:22:48.168 "copy": false, 01:22:48.168 "nvme_iov_md": false 01:22:48.168 }, 01:22:48.168 "memory_domains": [ 01:22:48.168 { 01:22:48.168 "dma_device_id": "system", 01:22:48.168 "dma_device_type": 1 01:22:48.168 }, 01:22:48.168 { 01:22:48.168 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:22:48.168 "dma_device_type": 2 01:22:48.168 }, 01:22:48.168 { 01:22:48.168 "dma_device_id": "system", 01:22:48.168 "dma_device_type": 1 01:22:48.168 }, 01:22:48.168 { 01:22:48.168 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:22:48.168 "dma_device_type": 2 01:22:48.168 }, 01:22:48.168 { 01:22:48.168 "dma_device_id": "system", 01:22:48.168 "dma_device_type": 1 01:22:48.168 }, 01:22:48.168 { 01:22:48.168 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:22:48.168 "dma_device_type": 2 01:22:48.168 } 01:22:48.168 ], 01:22:48.168 "driver_specific": { 01:22:48.168 "raid": { 01:22:48.168 "uuid": "d8bbe54c-2749-4675-aaf6-d86be8b04f37", 01:22:48.168 
"strip_size_kb": 64, 01:22:48.168 "state": "online", 01:22:48.168 "raid_level": "raid0", 01:22:48.168 "superblock": true, 01:22:48.168 "num_base_bdevs": 3, 01:22:48.168 "num_base_bdevs_discovered": 3, 01:22:48.168 "num_base_bdevs_operational": 3, 01:22:48.168 "base_bdevs_list": [ 01:22:48.168 { 01:22:48.168 "name": "NewBaseBdev", 01:22:48.168 "uuid": "5a5f1d9a-35a4-445e-89ee-75cbbe15e77a", 01:22:48.168 "is_configured": true, 01:22:48.168 "data_offset": 2048, 01:22:48.168 "data_size": 63488 01:22:48.168 }, 01:22:48.168 { 01:22:48.168 "name": "BaseBdev2", 01:22:48.168 "uuid": "9a8f7ebe-c003-469c-8431-90a4705c1d54", 01:22:48.168 "is_configured": true, 01:22:48.168 "data_offset": 2048, 01:22:48.168 "data_size": 63488 01:22:48.168 }, 01:22:48.168 { 01:22:48.168 "name": "BaseBdev3", 01:22:48.168 "uuid": "0e27479b-1b85-488d-9ce2-7f84ce6949d8", 01:22:48.168 "is_configured": true, 01:22:48.168 "data_offset": 2048, 01:22:48.168 "data_size": 63488 01:22:48.168 } 01:22:48.168 ] 01:22:48.168 } 01:22:48.168 } 01:22:48.168 }' 01:22:48.168 05:17:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 01:22:48.168 05:17:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 01:22:48.168 BaseBdev2 01:22:48.168 BaseBdev3' 01:22:48.168 05:17:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:22:48.168 05:17:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 01:22:48.168 05:17:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:22:48.168 05:17:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 01:22:48.168 05:17:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 01:22:48.168 05:17:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:22:48.168 05:17:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:22:48.426 05:17:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:48.426 05:17:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:22:48.426 05:17:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:22:48.426 05:17:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:22:48.426 05:17:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 01:22:48.426 05:17:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:48.426 05:17:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:22:48.426 05:17:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:22:48.426 05:17:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:48.426 05:17:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:22:48.426 05:17:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:22:48.426 05:17:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:22:48.426 05:17:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 01:22:48.426 05:17:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:48.426 05:17:39 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:22:48.426 05:17:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:22:48.426 05:17:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:48.427 05:17:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:22:48.427 05:17:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:22:48.427 05:17:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 01:22:48.427 05:17:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:48.427 05:17:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:22:48.427 [2024-12-09 05:17:39.939448] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 01:22:48.427 [2024-12-09 05:17:39.939520] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 01:22:48.427 [2024-12-09 05:17:39.939674] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 01:22:48.427 [2024-12-09 05:17:39.939774] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 01:22:48.427 [2024-12-09 05:17:39.939798] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 01:22:48.427 05:17:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:48.427 05:17:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 64315 01:22:48.427 05:17:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 64315 ']' 01:22:48.427 05:17:39 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 64315 01:22:48.427 05:17:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 01:22:48.427 05:17:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:22:48.427 05:17:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64315 01:22:48.427 killing process with pid 64315 01:22:48.427 05:17:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:22:48.427 05:17:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:22:48.427 05:17:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64315' 01:22:48.427 05:17:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 64315 01:22:48.427 05:17:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 64315 01:22:48.427 [2024-12-09 05:17:39.993644] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 01:22:48.994 [2024-12-09 05:17:40.416400] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 01:22:50.370 ************************************ 01:22:50.370 END TEST raid_state_function_test_sb 01:22:50.370 ************************************ 01:22:50.370 05:17:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 01:22:50.370 01:22:50.370 real 0m12.936s 01:22:50.370 user 0m20.572s 01:22:50.370 sys 0m2.005s 01:22:50.370 05:17:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 01:22:50.370 05:17:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:22:50.629 05:17:42 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 3 01:22:50.629 05:17:42 
bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 01:22:50.629 05:17:42 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 01:22:50.629 05:17:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 01:22:50.629 ************************************ 01:22:50.629 START TEST raid_superblock_test 01:22:50.629 ************************************ 01:22:50.629 05:17:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 3 01:22:50.629 05:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 01:22:50.629 05:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 01:22:50.629 05:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 01:22:50.629 05:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 01:22:50.629 05:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 01:22:50.629 05:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 01:22:50.629 05:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 01:22:50.629 05:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 01:22:50.629 05:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 01:22:50.629 05:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 01:22:50.629 05:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 01:22:50.629 05:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 01:22:50.629 05:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 01:22:50.629 05:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 01:22:50.629 05:17:42 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 01:22:50.629 05:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 01:22:50.629 05:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=64963 01:22:50.629 05:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 64963 01:22:50.629 05:17:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 64963 ']' 01:22:50.629 05:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 01:22:50.629 05:17:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:22:50.629 05:17:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 01:22:50.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:22:50.629 05:17:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:22:50.629 05:17:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 01:22:50.629 05:17:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:22:50.629 [2024-12-09 05:17:42.152893] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
01:22:50.630 [2024-12-09 05:17:42.153131] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64963 ] 01:22:50.888 [2024-12-09 05:17:42.350235] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:22:51.146 [2024-12-09 05:17:42.538193] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:22:51.404 [2024-12-09 05:17:42.845043] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 01:22:51.404 [2024-12-09 05:17:42.845150] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 01:22:51.662 05:17:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:22:51.662 05:17:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 01:22:51.662 05:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 01:22:51.662 05:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 01:22:51.662 05:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 01:22:51.662 05:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 01:22:51.662 05:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 01:22:51.662 05:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 01:22:51.662 05:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 01:22:51.662 05:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 01:22:51.662 05:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 01:22:51.662 
05:17:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:51.662 05:17:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:22:51.662 malloc1 01:22:51.662 05:17:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:51.662 05:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 01:22:51.662 05:17:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:51.662 05:17:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:22:51.662 [2024-12-09 05:17:43.251642] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 01:22:51.662 [2024-12-09 05:17:43.251766] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:22:51.662 [2024-12-09 05:17:43.251807] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 01:22:51.662 [2024-12-09 05:17:43.251823] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:22:51.662 [2024-12-09 05:17:43.255063] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:22:51.662 [2024-12-09 05:17:43.255112] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 01:22:51.662 pt1 01:22:51.662 05:17:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:51.662 05:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 01:22:51.662 05:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 01:22:51.662 05:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 01:22:51.662 05:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 01:22:51.662 05:17:43 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 01:22:51.662 05:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 01:22:51.662 05:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 01:22:51.662 05:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 01:22:51.662 05:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 01:22:51.662 05:17:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:51.662 05:17:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:22:51.921 malloc2 01:22:51.921 05:17:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:51.921 05:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 01:22:51.921 05:17:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:51.921 05:17:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:22:51.921 [2024-12-09 05:17:43.320386] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 01:22:51.921 [2024-12-09 05:17:43.320509] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:22:51.921 [2024-12-09 05:17:43.320568] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 01:22:51.921 [2024-12-09 05:17:43.320585] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:22:51.921 [2024-12-09 05:17:43.323909] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:22:51.921 [2024-12-09 05:17:43.324107] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 01:22:51.921 
pt2 01:22:51.922 05:17:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:51.922 05:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 01:22:51.922 05:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 01:22:51.922 05:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 01:22:51.922 05:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 01:22:51.922 05:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 01:22:51.922 05:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 01:22:51.922 05:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 01:22:51.922 05:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 01:22:51.922 05:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 01:22:51.922 05:17:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:51.922 05:17:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:22:51.922 malloc3 01:22:51.922 05:17:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:51.922 05:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 01:22:51.922 05:17:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:51.922 05:17:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:22:51.922 [2024-12-09 05:17:43.403441] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 01:22:51.922 [2024-12-09 05:17:43.403760] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:22:51.922 [2024-12-09 05:17:43.403810] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 01:22:51.922 [2024-12-09 05:17:43.403829] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:22:51.922 [2024-12-09 05:17:43.407919] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:22:51.922 [2024-12-09 05:17:43.407985] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 01:22:51.922 pt3 01:22:51.922 05:17:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:51.922 05:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 01:22:51.922 05:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 01:22:51.922 05:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 01:22:51.922 05:17:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:51.922 05:17:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:22:51.922 [2024-12-09 05:17:43.416398] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 01:22:51.922 [2024-12-09 05:17:43.419315] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 01:22:51.922 [2024-12-09 05:17:43.419590] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 01:22:51.922 [2024-12-09 05:17:43.419855] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 01:22:51.922 [2024-12-09 05:17:43.419879] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 01:22:51.922 [2024-12-09 05:17:43.420282] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
01:22:51.922 [2024-12-09 05:17:43.420580] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 01:22:51.922 [2024-12-09 05:17:43.420598] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 01:22:51.922 [2024-12-09 05:17:43.420939] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:22:51.922 05:17:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:51.922 05:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 01:22:51.922 05:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:22:51.922 05:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:22:51.922 05:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 01:22:51.922 05:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:22:51.922 05:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 01:22:51.922 05:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:22:51.922 05:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:22:51.922 05:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:22:51.922 05:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:22:51.922 05:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:22:51.922 05:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:22:51.922 05:17:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:51.922 05:17:43 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:22:51.922 05:17:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:51.922 05:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:22:51.922 "name": "raid_bdev1", 01:22:51.922 "uuid": "9fc8a35a-47d2-4882-85c4-c96f9e624ce0", 01:22:51.922 "strip_size_kb": 64, 01:22:51.922 "state": "online", 01:22:51.922 "raid_level": "raid0", 01:22:51.922 "superblock": true, 01:22:51.922 "num_base_bdevs": 3, 01:22:51.922 "num_base_bdevs_discovered": 3, 01:22:51.922 "num_base_bdevs_operational": 3, 01:22:51.922 "base_bdevs_list": [ 01:22:51.922 { 01:22:51.922 "name": "pt1", 01:22:51.922 "uuid": "00000000-0000-0000-0000-000000000001", 01:22:51.922 "is_configured": true, 01:22:51.922 "data_offset": 2048, 01:22:51.922 "data_size": 63488 01:22:51.922 }, 01:22:51.922 { 01:22:51.922 "name": "pt2", 01:22:51.922 "uuid": "00000000-0000-0000-0000-000000000002", 01:22:51.922 "is_configured": true, 01:22:51.922 "data_offset": 2048, 01:22:51.922 "data_size": 63488 01:22:51.922 }, 01:22:51.922 { 01:22:51.922 "name": "pt3", 01:22:51.922 "uuid": "00000000-0000-0000-0000-000000000003", 01:22:51.922 "is_configured": true, 01:22:51.922 "data_offset": 2048, 01:22:51.922 "data_size": 63488 01:22:51.922 } 01:22:51.922 ] 01:22:51.922 }' 01:22:51.922 05:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:22:51.922 05:17:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:22:52.489 05:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 01:22:52.489 05:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 01:22:52.489 05:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 01:22:52.489 05:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 01:22:52.489 05:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 01:22:52.489 05:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 01:22:52.489 05:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 01:22:52.489 05:17:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:52.489 05:17:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:22:52.489 05:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 01:22:52.489 [2024-12-09 05:17:43.965479] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 01:22:52.489 05:17:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:52.489 05:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 01:22:52.489 "name": "raid_bdev1", 01:22:52.489 "aliases": [ 01:22:52.489 "9fc8a35a-47d2-4882-85c4-c96f9e624ce0" 01:22:52.489 ], 01:22:52.489 "product_name": "Raid Volume", 01:22:52.489 "block_size": 512, 01:22:52.489 "num_blocks": 190464, 01:22:52.489 "uuid": "9fc8a35a-47d2-4882-85c4-c96f9e624ce0", 01:22:52.489 "assigned_rate_limits": { 01:22:52.489 "rw_ios_per_sec": 0, 01:22:52.489 "rw_mbytes_per_sec": 0, 01:22:52.489 "r_mbytes_per_sec": 0, 01:22:52.489 "w_mbytes_per_sec": 0 01:22:52.489 }, 01:22:52.489 "claimed": false, 01:22:52.489 "zoned": false, 01:22:52.489 "supported_io_types": { 01:22:52.489 "read": true, 01:22:52.489 "write": true, 01:22:52.489 "unmap": true, 01:22:52.489 "flush": true, 01:22:52.489 "reset": true, 01:22:52.489 "nvme_admin": false, 01:22:52.489 "nvme_io": false, 01:22:52.489 "nvme_io_md": false, 01:22:52.489 "write_zeroes": true, 01:22:52.489 "zcopy": false, 01:22:52.489 "get_zone_info": false, 01:22:52.489 "zone_management": false, 01:22:52.489 "zone_append": false, 01:22:52.489 "compare": 
false, 01:22:52.489 "compare_and_write": false, 01:22:52.489 "abort": false, 01:22:52.489 "seek_hole": false, 01:22:52.489 "seek_data": false, 01:22:52.489 "copy": false, 01:22:52.489 "nvme_iov_md": false 01:22:52.489 }, 01:22:52.489 "memory_domains": [ 01:22:52.489 { 01:22:52.489 "dma_device_id": "system", 01:22:52.489 "dma_device_type": 1 01:22:52.489 }, 01:22:52.489 { 01:22:52.489 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:22:52.489 "dma_device_type": 2 01:22:52.489 }, 01:22:52.489 { 01:22:52.489 "dma_device_id": "system", 01:22:52.489 "dma_device_type": 1 01:22:52.489 }, 01:22:52.489 { 01:22:52.489 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:22:52.489 "dma_device_type": 2 01:22:52.489 }, 01:22:52.489 { 01:22:52.489 "dma_device_id": "system", 01:22:52.489 "dma_device_type": 1 01:22:52.489 }, 01:22:52.489 { 01:22:52.489 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:22:52.489 "dma_device_type": 2 01:22:52.489 } 01:22:52.489 ], 01:22:52.489 "driver_specific": { 01:22:52.489 "raid": { 01:22:52.489 "uuid": "9fc8a35a-47d2-4882-85c4-c96f9e624ce0", 01:22:52.489 "strip_size_kb": 64, 01:22:52.489 "state": "online", 01:22:52.489 "raid_level": "raid0", 01:22:52.489 "superblock": true, 01:22:52.489 "num_base_bdevs": 3, 01:22:52.489 "num_base_bdevs_discovered": 3, 01:22:52.489 "num_base_bdevs_operational": 3, 01:22:52.489 "base_bdevs_list": [ 01:22:52.489 { 01:22:52.489 "name": "pt1", 01:22:52.489 "uuid": "00000000-0000-0000-0000-000000000001", 01:22:52.489 "is_configured": true, 01:22:52.489 "data_offset": 2048, 01:22:52.489 "data_size": 63488 01:22:52.489 }, 01:22:52.489 { 01:22:52.489 "name": "pt2", 01:22:52.489 "uuid": "00000000-0000-0000-0000-000000000002", 01:22:52.489 "is_configured": true, 01:22:52.489 "data_offset": 2048, 01:22:52.489 "data_size": 63488 01:22:52.489 }, 01:22:52.489 { 01:22:52.489 "name": "pt3", 01:22:52.489 "uuid": "00000000-0000-0000-0000-000000000003", 01:22:52.489 "is_configured": true, 01:22:52.489 "data_offset": 2048, 01:22:52.489 "data_size": 
63488 01:22:52.489 } 01:22:52.489 ] 01:22:52.489 } 01:22:52.489 } 01:22:52.489 }' 01:22:52.489 05:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 01:22:52.489 05:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 01:22:52.489 pt2 01:22:52.489 pt3' 01:22:52.489 05:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:22:52.748 05:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 01:22:52.748 05:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:22:52.748 05:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:22:52.748 05:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 01:22:52.748 05:17:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:52.748 05:17:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:22:52.748 05:17:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:52.748 05:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:22:52.748 05:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:22:52.748 05:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:22:52.748 05:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 01:22:52.748 05:17:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:52.748 05:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, 
.md_size, .md_interleave, .dif_type] | join(" ")' 01:22:52.748 05:17:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:22:52.748 05:17:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:52.748 05:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:22:52.748 05:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:22:52.748 05:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:22:52.748 05:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 01:22:52.748 05:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:22:52.748 05:17:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:52.748 05:17:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:22:52.748 05:17:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:52.748 05:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:22:52.748 05:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:22:52.748 05:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 01:22:52.748 05:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 01:22:52.748 05:17:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:52.748 05:17:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:22:52.748 [2024-12-09 05:17:44.289532] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 01:22:52.748 05:17:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 01:22:52.748 05:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=9fc8a35a-47d2-4882-85c4-c96f9e624ce0 01:22:52.748 05:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 9fc8a35a-47d2-4882-85c4-c96f9e624ce0 ']' 01:22:52.748 05:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 01:22:52.748 05:17:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:52.748 05:17:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:22:52.748 [2024-12-09 05:17:44.329212] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 01:22:52.748 [2024-12-09 05:17:44.329721] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 01:22:52.748 [2024-12-09 05:17:44.330034] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 01:22:52.748 [2024-12-09 05:17:44.330291] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 01:22:52.748 [2024-12-09 05:17:44.330320] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 01:22:52.748 05:17:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:52.748 05:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 01:22:52.748 05:17:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:52.748 05:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 01:22:52.748 05:17:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:22:52.748 05:17:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:53.007 05:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
01:22:53.007 05:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 01:22:53.007 05:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 01:22:53.007 05:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 01:22:53.007 05:17:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:53.007 05:17:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:22:53.007 05:17:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:53.007 05:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 01:22:53.007 05:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 01:22:53.007 05:17:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:53.007 05:17:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:22:53.007 05:17:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:53.007 05:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 01:22:53.007 05:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 01:22:53.007 05:17:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:53.007 05:17:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:22:53.007 05:17:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:53.007 05:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 01:22:53.007 05:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 01:22:53.007 05:17:44 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:53.007 05:17:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:22:53.007 05:17:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:53.007 05:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 01:22:53.007 05:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 01:22:53.007 05:17:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 01:22:53.007 05:17:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 01:22:53.007 05:17:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 01:22:53.007 05:17:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:22:53.007 05:17:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 01:22:53.007 05:17:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:22:53.007 05:17:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 01:22:53.007 05:17:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:53.007 05:17:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:22:53.007 [2024-12-09 05:17:44.457294] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 01:22:53.007 [2024-12-09 05:17:44.460381] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 01:22:53.007 [2024-12-09 05:17:44.460484] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 01:22:53.007 [2024-12-09 05:17:44.460596] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 01:22:53.007 [2024-12-09 05:17:44.460720] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 01:22:53.007 [2024-12-09 05:17:44.460755] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 01:22:53.008 [2024-12-09 05:17:44.460784] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 01:22:53.008 [2024-12-09 05:17:44.460803] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 01:22:53.008 request: 01:22:53.008 { 01:22:53.008 "name": "raid_bdev1", 01:22:53.008 "raid_level": "raid0", 01:22:53.008 "base_bdevs": [ 01:22:53.008 "malloc1", 01:22:53.008 "malloc2", 01:22:53.008 "malloc3" 01:22:53.008 ], 01:22:53.008 "strip_size_kb": 64, 01:22:53.008 "superblock": false, 01:22:53.008 "method": "bdev_raid_create", 01:22:53.008 "req_id": 1 01:22:53.008 } 01:22:53.008 Got JSON-RPC error response 01:22:53.008 response: 01:22:53.008 { 01:22:53.008 "code": -17, 01:22:53.008 "message": "Failed to create RAID bdev raid_bdev1: File exists" 01:22:53.008 } 01:22:53.008 05:17:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 01:22:53.008 05:17:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 01:22:53.008 05:17:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:22:53.008 05:17:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:22:53.008 05:17:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:22:53.008 05:17:44 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 01:22:53.008 05:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 01:22:53.008 05:17:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:53.008 05:17:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:22:53.008 05:17:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:53.008 05:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 01:22:53.008 05:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 01:22:53.008 05:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 01:22:53.008 05:17:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:53.008 05:17:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:22:53.008 [2024-12-09 05:17:44.533233] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 01:22:53.008 [2024-12-09 05:17:44.533674] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:22:53.008 [2024-12-09 05:17:44.533846] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 01:22:53.008 [2024-12-09 05:17:44.533988] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:22:53.008 [2024-12-09 05:17:44.537493] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:22:53.008 [2024-12-09 05:17:44.537670] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 01:22:53.008 [2024-12-09 05:17:44.537951] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 01:22:53.008 [2024-12-09 05:17:44.538140] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 
01:22:53.008 pt1 01:22:53.008 05:17:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:53.008 05:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 01:22:53.008 05:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:22:53.008 05:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:22:53.008 05:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 01:22:53.008 05:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:22:53.008 05:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 01:22:53.008 05:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:22:53.008 05:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:22:53.008 05:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:22:53.008 05:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:22:53.008 05:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:22:53.008 05:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:22:53.008 05:17:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:53.008 05:17:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:22:53.008 05:17:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:53.008 05:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:22:53.008 "name": "raid_bdev1", 01:22:53.008 "uuid": "9fc8a35a-47d2-4882-85c4-c96f9e624ce0", 01:22:53.008 
"strip_size_kb": 64, 01:22:53.008 "state": "configuring", 01:22:53.008 "raid_level": "raid0", 01:22:53.008 "superblock": true, 01:22:53.008 "num_base_bdevs": 3, 01:22:53.008 "num_base_bdevs_discovered": 1, 01:22:53.008 "num_base_bdevs_operational": 3, 01:22:53.008 "base_bdevs_list": [ 01:22:53.008 { 01:22:53.008 "name": "pt1", 01:22:53.008 "uuid": "00000000-0000-0000-0000-000000000001", 01:22:53.008 "is_configured": true, 01:22:53.008 "data_offset": 2048, 01:22:53.008 "data_size": 63488 01:22:53.008 }, 01:22:53.008 { 01:22:53.008 "name": null, 01:22:53.008 "uuid": "00000000-0000-0000-0000-000000000002", 01:22:53.008 "is_configured": false, 01:22:53.008 "data_offset": 2048, 01:22:53.008 "data_size": 63488 01:22:53.008 }, 01:22:53.008 { 01:22:53.008 "name": null, 01:22:53.008 "uuid": "00000000-0000-0000-0000-000000000003", 01:22:53.008 "is_configured": false, 01:22:53.008 "data_offset": 2048, 01:22:53.008 "data_size": 63488 01:22:53.008 } 01:22:53.008 ] 01:22:53.008 }' 01:22:53.008 05:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:22:53.008 05:17:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:22:53.581 05:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 01:22:53.582 05:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 01:22:53.582 05:17:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:53.582 05:17:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:22:53.582 [2024-12-09 05:17:45.082256] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 01:22:53.582 [2024-12-09 05:17:45.082413] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:22:53.582 [2024-12-09 05:17:45.082465] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000009c80 01:22:53.582 [2024-12-09 05:17:45.082482] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:22:53.582 [2024-12-09 05:17:45.083257] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:22:53.582 [2024-12-09 05:17:45.083293] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 01:22:53.582 [2024-12-09 05:17:45.083484] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 01:22:53.582 [2024-12-09 05:17:45.083539] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 01:22:53.582 pt2 01:22:53.582 05:17:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:53.582 05:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 01:22:53.582 05:17:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:53.582 05:17:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:22:53.582 [2024-12-09 05:17:45.094312] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 01:22:53.582 05:17:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:53.582 05:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 01:22:53.582 05:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:22:53.582 05:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:22:53.582 05:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 01:22:53.582 05:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:22:53.582 05:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 01:22:53.582 05:17:45 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:22:53.582 05:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:22:53.582 05:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:22:53.582 05:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:22:53.582 05:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:22:53.582 05:17:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:53.582 05:17:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:22:53.582 05:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:22:53.582 05:17:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:53.582 05:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:22:53.582 "name": "raid_bdev1", 01:22:53.582 "uuid": "9fc8a35a-47d2-4882-85c4-c96f9e624ce0", 01:22:53.582 "strip_size_kb": 64, 01:22:53.582 "state": "configuring", 01:22:53.582 "raid_level": "raid0", 01:22:53.582 "superblock": true, 01:22:53.582 "num_base_bdevs": 3, 01:22:53.582 "num_base_bdevs_discovered": 1, 01:22:53.582 "num_base_bdevs_operational": 3, 01:22:53.582 "base_bdevs_list": [ 01:22:53.582 { 01:22:53.582 "name": "pt1", 01:22:53.582 "uuid": "00000000-0000-0000-0000-000000000001", 01:22:53.582 "is_configured": true, 01:22:53.582 "data_offset": 2048, 01:22:53.582 "data_size": 63488 01:22:53.582 }, 01:22:53.582 { 01:22:53.582 "name": null, 01:22:53.582 "uuid": "00000000-0000-0000-0000-000000000002", 01:22:53.582 "is_configured": false, 01:22:53.582 "data_offset": 0, 01:22:53.582 "data_size": 63488 01:22:53.582 }, 01:22:53.582 { 01:22:53.582 "name": null, 01:22:53.582 "uuid": "00000000-0000-0000-0000-000000000003", 01:22:53.582 
"is_configured": false, 01:22:53.582 "data_offset": 2048, 01:22:53.582 "data_size": 63488 01:22:53.582 } 01:22:53.582 ] 01:22:53.582 }' 01:22:53.582 05:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:22:53.582 05:17:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:22:54.151 05:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 01:22:54.151 05:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 01:22:54.151 05:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 01:22:54.151 05:17:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:54.151 05:17:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:22:54.152 [2024-12-09 05:17:45.602381] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 01:22:54.152 [2024-12-09 05:17:45.602542] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:22:54.152 [2024-12-09 05:17:45.602578] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 01:22:54.152 [2024-12-09 05:17:45.602598] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:22:54.152 [2024-12-09 05:17:45.603391] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:22:54.152 [2024-12-09 05:17:45.603425] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 01:22:54.152 [2024-12-09 05:17:45.603556] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 01:22:54.152 [2024-12-09 05:17:45.603598] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 01:22:54.152 pt2 01:22:54.152 05:17:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
01:22:54.152 05:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 01:22:54.152 05:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 01:22:54.152 05:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 01:22:54.152 05:17:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:54.152 05:17:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:22:54.152 [2024-12-09 05:17:45.610321] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 01:22:54.152 [2024-12-09 05:17:45.610774] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:22:54.152 [2024-12-09 05:17:45.610814] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 01:22:54.152 [2024-12-09 05:17:45.610835] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:22:54.152 [2024-12-09 05:17:45.611528] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:22:54.152 [2024-12-09 05:17:45.611579] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 01:22:54.152 [2024-12-09 05:17:45.611713] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 01:22:54.152 [2024-12-09 05:17:45.611764] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 01:22:54.152 [2024-12-09 05:17:45.611961] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 01:22:54.152 [2024-12-09 05:17:45.611983] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 01:22:54.152 [2024-12-09 05:17:45.612340] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 01:22:54.152 [2024-12-09 05:17:45.612580] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 01:22:54.152 [2024-12-09 05:17:45.612595] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 01:22:54.152 [2024-12-09 05:17:45.612782] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:22:54.152 pt3 01:22:54.152 05:17:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:54.152 05:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 01:22:54.152 05:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 01:22:54.152 05:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 01:22:54.152 05:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:22:54.152 05:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:22:54.152 05:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 01:22:54.152 05:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:22:54.152 05:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 01:22:54.152 05:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:22:54.152 05:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:22:54.152 05:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:22:54.152 05:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:22:54.152 05:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:22:54.152 05:17:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 01:22:54.152 05:17:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:22:54.152 05:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:22:54.152 05:17:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:54.152 05:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:22:54.152 "name": "raid_bdev1", 01:22:54.152 "uuid": "9fc8a35a-47d2-4882-85c4-c96f9e624ce0", 01:22:54.152 "strip_size_kb": 64, 01:22:54.152 "state": "online", 01:22:54.152 "raid_level": "raid0", 01:22:54.152 "superblock": true, 01:22:54.152 "num_base_bdevs": 3, 01:22:54.152 "num_base_bdevs_discovered": 3, 01:22:54.152 "num_base_bdevs_operational": 3, 01:22:54.152 "base_bdevs_list": [ 01:22:54.152 { 01:22:54.152 "name": "pt1", 01:22:54.152 "uuid": "00000000-0000-0000-0000-000000000001", 01:22:54.152 "is_configured": true, 01:22:54.152 "data_offset": 2048, 01:22:54.152 "data_size": 63488 01:22:54.152 }, 01:22:54.152 { 01:22:54.152 "name": "pt2", 01:22:54.152 "uuid": "00000000-0000-0000-0000-000000000002", 01:22:54.152 "is_configured": true, 01:22:54.152 "data_offset": 2048, 01:22:54.152 "data_size": 63488 01:22:54.152 }, 01:22:54.152 { 01:22:54.152 "name": "pt3", 01:22:54.152 "uuid": "00000000-0000-0000-0000-000000000003", 01:22:54.152 "is_configured": true, 01:22:54.152 "data_offset": 2048, 01:22:54.152 "data_size": 63488 01:22:54.152 } 01:22:54.152 ] 01:22:54.152 }' 01:22:54.152 05:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:22:54.152 05:17:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:22:54.766 05:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 01:22:54.766 05:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 01:22:54.766 05:17:46 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 01:22:54.766 05:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 01:22:54.766 05:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 01:22:54.766 05:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 01:22:54.766 05:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 01:22:54.766 05:17:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:54.766 05:17:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:22:54.766 05:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 01:22:54.766 [2024-12-09 05:17:46.098933] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 01:22:54.766 05:17:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:54.766 05:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 01:22:54.766 "name": "raid_bdev1", 01:22:54.766 "aliases": [ 01:22:54.766 "9fc8a35a-47d2-4882-85c4-c96f9e624ce0" 01:22:54.766 ], 01:22:54.766 "product_name": "Raid Volume", 01:22:54.766 "block_size": 512, 01:22:54.766 "num_blocks": 190464, 01:22:54.766 "uuid": "9fc8a35a-47d2-4882-85c4-c96f9e624ce0", 01:22:54.766 "assigned_rate_limits": { 01:22:54.766 "rw_ios_per_sec": 0, 01:22:54.766 "rw_mbytes_per_sec": 0, 01:22:54.766 "r_mbytes_per_sec": 0, 01:22:54.766 "w_mbytes_per_sec": 0 01:22:54.766 }, 01:22:54.766 "claimed": false, 01:22:54.766 "zoned": false, 01:22:54.766 "supported_io_types": { 01:22:54.766 "read": true, 01:22:54.766 "write": true, 01:22:54.766 "unmap": true, 01:22:54.766 "flush": true, 01:22:54.766 "reset": true, 01:22:54.766 "nvme_admin": false, 01:22:54.766 "nvme_io": false, 01:22:54.766 "nvme_io_md": false, 01:22:54.766 
"write_zeroes": true, 01:22:54.766 "zcopy": false, 01:22:54.766 "get_zone_info": false, 01:22:54.766 "zone_management": false, 01:22:54.766 "zone_append": false, 01:22:54.766 "compare": false, 01:22:54.766 "compare_and_write": false, 01:22:54.766 "abort": false, 01:22:54.766 "seek_hole": false, 01:22:54.766 "seek_data": false, 01:22:54.766 "copy": false, 01:22:54.766 "nvme_iov_md": false 01:22:54.766 }, 01:22:54.766 "memory_domains": [ 01:22:54.766 { 01:22:54.766 "dma_device_id": "system", 01:22:54.766 "dma_device_type": 1 01:22:54.766 }, 01:22:54.766 { 01:22:54.766 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:22:54.766 "dma_device_type": 2 01:22:54.766 }, 01:22:54.766 { 01:22:54.766 "dma_device_id": "system", 01:22:54.766 "dma_device_type": 1 01:22:54.766 }, 01:22:54.766 { 01:22:54.766 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:22:54.766 "dma_device_type": 2 01:22:54.766 }, 01:22:54.766 { 01:22:54.766 "dma_device_id": "system", 01:22:54.766 "dma_device_type": 1 01:22:54.766 }, 01:22:54.766 { 01:22:54.766 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:22:54.766 "dma_device_type": 2 01:22:54.766 } 01:22:54.766 ], 01:22:54.766 "driver_specific": { 01:22:54.766 "raid": { 01:22:54.766 "uuid": "9fc8a35a-47d2-4882-85c4-c96f9e624ce0", 01:22:54.766 "strip_size_kb": 64, 01:22:54.766 "state": "online", 01:22:54.766 "raid_level": "raid0", 01:22:54.766 "superblock": true, 01:22:54.766 "num_base_bdevs": 3, 01:22:54.766 "num_base_bdevs_discovered": 3, 01:22:54.766 "num_base_bdevs_operational": 3, 01:22:54.766 "base_bdevs_list": [ 01:22:54.766 { 01:22:54.766 "name": "pt1", 01:22:54.766 "uuid": "00000000-0000-0000-0000-000000000001", 01:22:54.766 "is_configured": true, 01:22:54.766 "data_offset": 2048, 01:22:54.766 "data_size": 63488 01:22:54.766 }, 01:22:54.766 { 01:22:54.766 "name": "pt2", 01:22:54.766 "uuid": "00000000-0000-0000-0000-000000000002", 01:22:54.766 "is_configured": true, 01:22:54.766 "data_offset": 2048, 01:22:54.766 "data_size": 63488 01:22:54.766 }, 01:22:54.766 
{ 01:22:54.766 "name": "pt3", 01:22:54.766 "uuid": "00000000-0000-0000-0000-000000000003", 01:22:54.766 "is_configured": true, 01:22:54.766 "data_offset": 2048, 01:22:54.766 "data_size": 63488 01:22:54.766 } 01:22:54.766 ] 01:22:54.766 } 01:22:54.766 } 01:22:54.766 }' 01:22:54.766 05:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 01:22:54.766 05:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 01:22:54.766 pt2 01:22:54.766 pt3' 01:22:54.766 05:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:22:54.766 05:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 01:22:54.766 05:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:22:54.766 05:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 01:22:54.766 05:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:22:54.766 05:17:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:54.766 05:17:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:22:54.766 05:17:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:54.766 05:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:22:54.766 05:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:22:54.766 05:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:22:54.766 05:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | 
join(" ")' 01:22:54.766 05:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 01:22:54.766 05:17:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:54.766 05:17:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:22:54.766 05:17:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:54.766 05:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:22:54.766 05:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:22:54.766 05:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:22:54.766 05:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:22:54.766 05:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 01:22:54.766 05:17:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:54.766 05:17:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:22:55.024 05:17:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:55.024 05:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:22:55.024 05:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:22:55.024 05:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 01:22:55.024 05:17:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:55.024 05:17:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:22:55.024 05:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 01:22:55.024 [2024-12-09 
05:17:46.434989] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 01:22:55.024 05:17:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:55.024 05:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 9fc8a35a-47d2-4882-85c4-c96f9e624ce0 '!=' 9fc8a35a-47d2-4882-85c4-c96f9e624ce0 ']' 01:22:55.024 05:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 01:22:55.024 05:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 01:22:55.024 05:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 01:22:55.024 05:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 64963 01:22:55.024 05:17:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 64963 ']' 01:22:55.024 05:17:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 64963 01:22:55.024 05:17:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 01:22:55.024 05:17:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:22:55.024 05:17:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64963 01:22:55.024 05:17:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:22:55.024 05:17:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:22:55.024 05:17:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64963' 01:22:55.024 killing process with pid 64963 01:22:55.024 05:17:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 64963 01:22:55.024 05:17:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 64963 01:22:55.024 [2024-12-09 05:17:46.528501] bdev_raid.c:1387:raid_bdev_fini_start: 
*DEBUG*: raid_bdev_fini_start 01:22:55.024 [2024-12-09 05:17:46.528703] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 01:22:55.024 [2024-12-09 05:17:46.528826] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 01:22:55.024 [2024-12-09 05:17:46.528849] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 01:22:55.588 [2024-12-09 05:17:46.916146] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 01:22:57.023 05:17:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 01:22:57.023 01:22:57.023 real 0m6.502s 01:22:57.023 user 0m9.080s 01:22:57.023 sys 0m1.182s 01:22:57.023 ************************************ 01:22:57.023 END TEST raid_superblock_test 01:22:57.023 ************************************ 01:22:57.023 05:17:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 01:22:57.023 05:17:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:22:57.023 05:17:48 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 01:22:57.023 05:17:48 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 01:22:57.023 05:17:48 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 01:22:57.023 05:17:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 01:22:57.023 ************************************ 01:22:57.023 START TEST raid_read_error_test 01:22:57.023 ************************************ 01:22:57.023 05:17:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 read 01:22:57.023 05:17:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 01:22:57.023 05:17:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 01:22:57.023 05:17:48 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@792 -- # local error_io_type=read 01:22:57.023 05:17:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 01:22:57.023 05:17:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 01:22:57.023 05:17:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 01:22:57.023 05:17:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 01:22:57.023 05:17:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 01:22:57.023 05:17:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 01:22:57.023 05:17:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 01:22:57.023 05:17:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 01:22:57.023 05:17:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 01:22:57.023 05:17:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 01:22:57.023 05:17:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 01:22:57.023 05:17:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 01:22:57.023 05:17:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 01:22:57.023 05:17:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 01:22:57.023 05:17:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 01:22:57.023 05:17:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 01:22:57.023 05:17:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 01:22:57.023 05:17:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 01:22:57.023 05:17:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 
'!=' raid1 ']' 01:22:57.023 05:17:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 01:22:57.023 05:17:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 01:22:57.023 05:17:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 01:22:57.023 05:17:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.Y1oagcHioT 01:22:57.023 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:22:57.023 05:17:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65227 01:22:57.023 05:17:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65227 01:22:57.023 05:17:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 01:22:57.023 05:17:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 65227 ']' 01:22:57.023 05:17:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:22:57.023 05:17:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 01:22:57.023 05:17:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:22:57.023 05:17:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 01:22:57.023 05:17:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 01:22:57.281 [2024-12-09 05:17:48.752898] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
01:22:57.281 [2024-12-09 05:17:48.753194] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65227 ] 01:22:57.539 [2024-12-09 05:17:48.954693] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:22:57.539 [2024-12-09 05:17:49.134328] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:22:58.104 [2024-12-09 05:17:49.457831] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 01:22:58.104 [2024-12-09 05:17:49.458298] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 01:22:58.362 05:17:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:22:58.362 05:17:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 01:22:58.362 05:17:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 01:22:58.362 05:17:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 01:22:58.362 05:17:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:58.362 05:17:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 01:22:58.362 BaseBdev1_malloc 01:22:58.362 05:17:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:58.362 05:17:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 01:22:58.362 05:17:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:58.362 05:17:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 01:22:58.362 true 01:22:58.362 05:17:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
01:22:58.362 05:17:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 01:22:58.362 05:17:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:58.362 05:17:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 01:22:58.362 [2024-12-09 05:17:49.818295] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 01:22:58.362 [2024-12-09 05:17:49.818447] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:22:58.362 [2024-12-09 05:17:49.818500] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 01:22:58.362 [2024-12-09 05:17:49.818523] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:22:58.362 [2024-12-09 05:17:49.822089] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:22:58.362 [2024-12-09 05:17:49.822145] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 01:22:58.362 BaseBdev1 01:22:58.362 05:17:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:58.362 05:17:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 01:22:58.362 05:17:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 01:22:58.362 05:17:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:58.362 05:17:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 01:22:58.362 BaseBdev2_malloc 01:22:58.362 05:17:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:58.362 05:17:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 01:22:58.362 05:17:49 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 01:22:58.362 05:17:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 01:22:58.362 true 01:22:58.362 05:17:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:58.362 05:17:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 01:22:58.362 05:17:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:58.362 05:17:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 01:22:58.363 [2024-12-09 05:17:49.900674] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 01:22:58.363 [2024-12-09 05:17:49.900807] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:22:58.363 [2024-12-09 05:17:49.900844] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 01:22:58.363 [2024-12-09 05:17:49.900865] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:22:58.363 [2024-12-09 05:17:49.904374] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:22:58.363 [2024-12-09 05:17:49.904426] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 01:22:58.363 BaseBdev2 01:22:58.363 05:17:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:58.363 05:17:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 01:22:58.363 05:17:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 01:22:58.363 05:17:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:58.363 05:17:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 01:22:58.363 BaseBdev3_malloc 01:22:58.620 05:17:49 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:58.620 05:17:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 01:22:58.620 05:17:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:58.620 05:17:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 01:22:58.620 true 01:22:58.620 05:17:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:58.620 05:17:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 01:22:58.620 05:17:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:58.620 05:17:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 01:22:58.620 [2024-12-09 05:17:49.996670] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 01:22:58.620 [2024-12-09 05:17:49.996846] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:22:58.620 [2024-12-09 05:17:49.996898] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 01:22:58.620 [2024-12-09 05:17:49.996927] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:22:58.620 [2024-12-09 05:17:50.001517] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:22:58.620 [2024-12-09 05:17:50.001625] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 01:22:58.620 BaseBdev3 01:22:58.620 05:17:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:58.620 05:17:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 01:22:58.620 05:17:50 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 01:22:58.620 05:17:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 01:22:58.620 [2024-12-09 05:17:50.010210] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 01:22:58.620 [2024-12-09 05:17:50.014412] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 01:22:58.620 [2024-12-09 05:17:50.014596] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 01:22:58.620 [2024-12-09 05:17:50.015137] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 01:22:58.620 [2024-12-09 05:17:50.015163] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 01:22:58.620 [2024-12-09 05:17:50.015751] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 01:22:58.620 [2024-12-09 05:17:50.016107] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 01:22:58.620 [2024-12-09 05:17:50.016145] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 01:22:58.620 [2024-12-09 05:17:50.016569] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:22:58.620 05:17:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:58.620 05:17:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 01:22:58.620 05:17:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:22:58.620 05:17:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:22:58.620 05:17:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 01:22:58.620 05:17:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:22:58.620 05:17:50 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 01:22:58.620 05:17:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:22:58.620 05:17:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:22:58.620 05:17:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:22:58.620 05:17:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:22:58.620 05:17:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:22:58.620 05:17:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:58.620 05:17:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 01:22:58.620 05:17:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:22:58.620 05:17:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:58.620 05:17:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:22:58.620 "name": "raid_bdev1", 01:22:58.620 "uuid": "c473ff73-9a71-43ed-be54-80be1d058abd", 01:22:58.620 "strip_size_kb": 64, 01:22:58.620 "state": "online", 01:22:58.620 "raid_level": "raid0", 01:22:58.620 "superblock": true, 01:22:58.621 "num_base_bdevs": 3, 01:22:58.621 "num_base_bdevs_discovered": 3, 01:22:58.621 "num_base_bdevs_operational": 3, 01:22:58.621 "base_bdevs_list": [ 01:22:58.621 { 01:22:58.621 "name": "BaseBdev1", 01:22:58.621 "uuid": "6c1e6fac-8f42-5ef9-9ed8-e83c6ca1b9d9", 01:22:58.621 "is_configured": true, 01:22:58.621 "data_offset": 2048, 01:22:58.621 "data_size": 63488 01:22:58.621 }, 01:22:58.621 { 01:22:58.621 "name": "BaseBdev2", 01:22:58.621 "uuid": "0d4d3332-73a8-5ee0-80c3-d4571b1b33ad", 01:22:58.621 "is_configured": true, 01:22:58.621 "data_offset": 2048, 01:22:58.621 "data_size": 63488 
01:22:58.621 }, 01:22:58.621 { 01:22:58.621 "name": "BaseBdev3", 01:22:58.621 "uuid": "70f51d53-182a-5079-8b0c-7ac63a323f37", 01:22:58.621 "is_configured": true, 01:22:58.621 "data_offset": 2048, 01:22:58.621 "data_size": 63488 01:22:58.621 } 01:22:58.621 ] 01:22:58.621 }' 01:22:58.621 05:17:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:22:58.621 05:17:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 01:22:59.187 05:17:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 01:22:59.187 05:17:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 01:22:59.187 [2024-12-09 05:17:50.724886] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 01:23:00.124 05:17:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 01:23:00.124 05:17:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:00.124 05:17:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 01:23:00.124 05:17:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:00.124 05:17:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 01:23:00.124 05:17:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 01:23:00.124 05:17:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 01:23:00.124 05:17:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 01:23:00.124 05:17:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:23:00.124 05:17:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
01:23:00.124 05:17:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 01:23:00.124 05:17:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:23:00.124 05:17:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 01:23:00.124 05:17:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:23:00.124 05:17:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:23:00.124 05:17:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:23:00.124 05:17:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:23:00.124 05:17:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:23:00.124 05:17:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:23:00.124 05:17:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:00.124 05:17:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 01:23:00.124 05:17:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:00.124 05:17:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:23:00.124 "name": "raid_bdev1", 01:23:00.124 "uuid": "c473ff73-9a71-43ed-be54-80be1d058abd", 01:23:00.124 "strip_size_kb": 64, 01:23:00.124 "state": "online", 01:23:00.124 "raid_level": "raid0", 01:23:00.124 "superblock": true, 01:23:00.124 "num_base_bdevs": 3, 01:23:00.124 "num_base_bdevs_discovered": 3, 01:23:00.124 "num_base_bdevs_operational": 3, 01:23:00.124 "base_bdevs_list": [ 01:23:00.124 { 01:23:00.124 "name": "BaseBdev1", 01:23:00.124 "uuid": "6c1e6fac-8f42-5ef9-9ed8-e83c6ca1b9d9", 01:23:00.124 "is_configured": true, 01:23:00.124 "data_offset": 2048, 01:23:00.124 "data_size": 63488 
01:23:00.124 }, 01:23:00.124 { 01:23:00.124 "name": "BaseBdev2", 01:23:00.124 "uuid": "0d4d3332-73a8-5ee0-80c3-d4571b1b33ad", 01:23:00.124 "is_configured": true, 01:23:00.124 "data_offset": 2048, 01:23:00.124 "data_size": 63488 01:23:00.124 }, 01:23:00.125 { 01:23:00.125 "name": "BaseBdev3", 01:23:00.125 "uuid": "70f51d53-182a-5079-8b0c-7ac63a323f37", 01:23:00.125 "is_configured": true, 01:23:00.125 "data_offset": 2048, 01:23:00.125 "data_size": 63488 01:23:00.125 } 01:23:00.125 ] 01:23:00.125 }' 01:23:00.125 05:17:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:23:00.125 05:17:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 01:23:00.691 05:17:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 01:23:00.691 05:17:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:00.691 05:17:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 01:23:00.691 [2024-12-09 05:17:52.125091] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 01:23:00.691 [2024-12-09 05:17:52.125452] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 01:23:00.691 { 01:23:00.691 "results": [ 01:23:00.691 { 01:23:00.691 "job": "raid_bdev1", 01:23:00.691 "core_mask": "0x1", 01:23:00.691 "workload": "randrw", 01:23:00.691 "percentage": 50, 01:23:00.691 "status": "finished", 01:23:00.691 "queue_depth": 1, 01:23:00.691 "io_size": 131072, 01:23:00.691 "runtime": 1.397371, 01:23:00.691 "iops": 7708.046037881135, 01:23:00.691 "mibps": 963.5057547351419, 01:23:00.691 "io_failed": 1, 01:23:00.691 "io_timeout": 0, 01:23:00.691 "avg_latency_us": 185.22315768153123, 01:23:00.691 "min_latency_us": 33.512727272727275, 01:23:00.691 "max_latency_us": 1876.7127272727273 01:23:00.691 } 01:23:00.691 ], 01:23:00.691 "core_count": 1 01:23:00.691 } 01:23:00.691 [2024-12-09 
05:17:52.129047] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 01:23:00.691 [2024-12-09 05:17:52.129182] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:23:00.691 [2024-12-09 05:17:52.129249] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 01:23:00.691 [2024-12-09 05:17:52.129265] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 01:23:00.691 05:17:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:00.691 05:17:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65227 01:23:00.691 05:17:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 65227 ']' 01:23:00.691 05:17:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 65227 01:23:00.691 05:17:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 01:23:00.691 05:17:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:23:00.691 05:17:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65227 01:23:00.691 05:17:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:23:00.691 05:17:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:23:00.691 05:17:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65227' 01:23:00.691 killing process with pid 65227 01:23:00.691 05:17:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 65227 01:23:00.691 05:17:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 65227 01:23:00.691 [2024-12-09 05:17:52.173253] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 01:23:00.955 [2024-12-09 
05:17:52.437414] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 01:23:02.368 05:17:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.Y1oagcHioT 01:23:02.368 05:17:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 01:23:02.368 05:17:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 01:23:02.627 05:17:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 01:23:02.627 05:17:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 01:23:02.627 05:17:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 01:23:02.627 05:17:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 01:23:02.627 05:17:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 01:23:02.627 01:23:02.627 real 0m5.402s 01:23:02.627 user 0m6.246s 01:23:02.627 sys 0m0.993s 01:23:02.627 ************************************ 01:23:02.627 END TEST raid_read_error_test 01:23:02.627 ************************************ 01:23:02.628 05:17:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 01:23:02.628 05:17:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 01:23:02.628 05:17:54 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 01:23:02.628 05:17:54 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 01:23:02.628 05:17:54 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 01:23:02.628 05:17:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 01:23:02.628 ************************************ 01:23:02.628 START TEST raid_write_error_test 01:23:02.628 ************************************ 01:23:02.628 05:17:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 write 01:23:02.628 05:17:54 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 01:23:02.628 05:17:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 01:23:02.628 05:17:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 01:23:02.628 05:17:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 01:23:02.628 05:17:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 01:23:02.628 05:17:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 01:23:02.628 05:17:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 01:23:02.628 05:17:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 01:23:02.628 05:17:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 01:23:02.628 05:17:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 01:23:02.628 05:17:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 01:23:02.628 05:17:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 01:23:02.628 05:17:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 01:23:02.628 05:17:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 01:23:02.628 05:17:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 01:23:02.628 05:17:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 01:23:02.628 05:17:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 01:23:02.628 05:17:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 01:23:02.628 05:17:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 01:23:02.628 05:17:54 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 01:23:02.628 05:17:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 01:23:02.628 05:17:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 01:23:02.628 05:17:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 01:23:02.628 05:17:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 01:23:02.628 05:17:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 01:23:02.628 05:17:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.VNx7BAm2TM 01:23:02.628 05:17:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65384 01:23:02.628 05:17:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 01:23:02.628 05:17:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65384 01:23:02.628 05:17:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 65384 ']' 01:23:02.628 05:17:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:23:02.628 05:17:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 01:23:02.628 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:23:02.628 05:17:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
01:23:02.628 05:17:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 01:23:02.628 05:17:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 01:23:02.628 [2024-12-09 05:17:54.195498] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:23:02.628 [2024-12-09 05:17:54.195682] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65384 ] 01:23:02.886 [2024-12-09 05:17:54.380218] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:23:03.144 [2024-12-09 05:17:54.542586] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:23:03.402 [2024-12-09 05:17:54.796695] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 01:23:03.402 [2024-12-09 05:17:54.796766] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 01:23:03.664 05:17:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:23:03.664 05:17:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 01:23:03.664 05:17:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 01:23:03.664 05:17:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 01:23:03.664 05:17:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:03.664 05:17:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 01:23:03.664 BaseBdev1_malloc 01:23:03.664 05:17:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:03.664 05:17:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 01:23:03.664 05:17:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:03.664 05:17:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 01:23:03.664 true 01:23:03.664 05:17:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:03.664 05:17:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 01:23:03.664 05:17:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:03.664 05:17:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 01:23:03.664 [2024-12-09 05:17:55.256162] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 01:23:03.664 [2024-12-09 05:17:55.256268] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:23:03.664 [2024-12-09 05:17:55.256303] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 01:23:03.664 [2024-12-09 05:17:55.256338] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:23:03.664 [2024-12-09 05:17:55.259563] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:23:03.664 [2024-12-09 05:17:55.259613] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 01:23:03.664 BaseBdev1 01:23:03.664 05:17:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:03.664 05:17:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 01:23:03.664 05:17:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 01:23:03.664 05:17:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:03.664 05:17:55 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 01:23:03.922 BaseBdev2_malloc 01:23:03.922 05:17:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:03.922 05:17:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 01:23:03.922 05:17:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:03.922 05:17:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 01:23:03.922 true 01:23:03.922 05:17:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:03.922 05:17:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 01:23:03.922 05:17:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:03.922 05:17:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 01:23:03.922 [2024-12-09 05:17:55.332747] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 01:23:03.922 [2024-12-09 05:17:55.332846] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:23:03.922 [2024-12-09 05:17:55.332872] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 01:23:03.922 [2024-12-09 05:17:55.332891] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:23:03.922 [2024-12-09 05:17:55.336098] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:23:03.922 [2024-12-09 05:17:55.336147] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 01:23:03.922 BaseBdev2 01:23:03.922 05:17:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:03.922 05:17:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 01:23:03.922 05:17:55 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 01:23:03.922 05:17:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:03.922 05:17:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 01:23:03.922 BaseBdev3_malloc 01:23:03.922 05:17:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:03.922 05:17:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 01:23:03.922 05:17:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:03.922 05:17:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 01:23:03.922 true 01:23:03.922 05:17:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:03.922 05:17:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 01:23:03.922 05:17:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:03.922 05:17:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 01:23:03.922 [2024-12-09 05:17:55.417547] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 01:23:03.922 [2024-12-09 05:17:55.417641] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:23:03.922 [2024-12-09 05:17:55.417671] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 01:23:03.922 [2024-12-09 05:17:55.417690] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:23:03.922 [2024-12-09 05:17:55.420888] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:23:03.922 [2024-12-09 05:17:55.420937] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 01:23:03.922 BaseBdev3 01:23:03.922 05:17:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:03.922 05:17:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 01:23:03.922 05:17:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:03.922 05:17:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 01:23:03.922 [2024-12-09 05:17:55.429699] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 01:23:03.922 [2024-12-09 05:17:55.432377] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 01:23:03.922 [2024-12-09 05:17:55.432501] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 01:23:03.922 [2024-12-09 05:17:55.432803] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 01:23:03.922 [2024-12-09 05:17:55.432832] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 01:23:03.922 [2024-12-09 05:17:55.433164] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 01:23:03.922 [2024-12-09 05:17:55.433432] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 01:23:03.922 [2024-12-09 05:17:55.433465] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 01:23:03.922 [2024-12-09 05:17:55.433727] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:23:03.922 05:17:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:03.922 05:17:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 01:23:03.922 05:17:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 01:23:03.922 05:17:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:23:03.922 05:17:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 01:23:03.922 05:17:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:23:03.922 05:17:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 01:23:03.922 05:17:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:23:03.922 05:17:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:23:03.922 05:17:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:23:03.922 05:17:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:23:03.922 05:17:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:23:03.922 05:17:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:23:03.922 05:17:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:03.922 05:17:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 01:23:03.922 05:17:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:03.922 05:17:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:23:03.922 "name": "raid_bdev1", 01:23:03.922 "uuid": "a03c3521-56e5-4329-a8d9-343326168d74", 01:23:03.922 "strip_size_kb": 64, 01:23:03.922 "state": "online", 01:23:03.922 "raid_level": "raid0", 01:23:03.922 "superblock": true, 01:23:03.922 "num_base_bdevs": 3, 01:23:03.922 "num_base_bdevs_discovered": 3, 01:23:03.922 "num_base_bdevs_operational": 3, 01:23:03.922 "base_bdevs_list": [ 01:23:03.922 { 01:23:03.922 "name": "BaseBdev1", 
01:23:03.922 "uuid": "838f7ef4-975f-50ea-bbcf-6920db87c37e", 01:23:03.922 "is_configured": true, 01:23:03.922 "data_offset": 2048, 01:23:03.922 "data_size": 63488 01:23:03.922 }, 01:23:03.922 { 01:23:03.922 "name": "BaseBdev2", 01:23:03.922 "uuid": "4be9a02b-3c0b-53c4-80ce-c4653869f191", 01:23:03.922 "is_configured": true, 01:23:03.922 "data_offset": 2048, 01:23:03.922 "data_size": 63488 01:23:03.923 }, 01:23:03.923 { 01:23:03.923 "name": "BaseBdev3", 01:23:03.923 "uuid": "cdacd12b-1097-5fea-99bc-d76bb2a8b3df", 01:23:03.923 "is_configured": true, 01:23:03.923 "data_offset": 2048, 01:23:03.923 "data_size": 63488 01:23:03.923 } 01:23:03.923 ] 01:23:03.923 }' 01:23:03.923 05:17:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:23:03.923 05:17:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 01:23:04.490 05:17:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 01:23:04.490 05:17:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 01:23:04.490 [2024-12-09 05:17:56.063590] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 01:23:05.425 05:17:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 01:23:05.425 05:17:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:05.425 05:17:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 01:23:05.425 05:17:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:05.425 05:17:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 01:23:05.425 05:17:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 01:23:05.425 05:17:56 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 01:23:05.425 05:17:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 01:23:05.425 05:17:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:23:05.425 05:17:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:23:05.425 05:17:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 01:23:05.425 05:17:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:23:05.425 05:17:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 01:23:05.425 05:17:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:23:05.425 05:17:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:23:05.425 05:17:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:23:05.425 05:17:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:23:05.425 05:17:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:23:05.425 05:17:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:05.425 05:17:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:23:05.425 05:17:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 01:23:05.425 05:17:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:05.425 05:17:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:23:05.425 "name": "raid_bdev1", 01:23:05.425 "uuid": "a03c3521-56e5-4329-a8d9-343326168d74", 01:23:05.425 "strip_size_kb": 64, 01:23:05.425 "state": "online", 01:23:05.425 
"raid_level": "raid0", 01:23:05.425 "superblock": true, 01:23:05.425 "num_base_bdevs": 3, 01:23:05.425 "num_base_bdevs_discovered": 3, 01:23:05.425 "num_base_bdevs_operational": 3, 01:23:05.425 "base_bdevs_list": [ 01:23:05.425 { 01:23:05.425 "name": "BaseBdev1", 01:23:05.425 "uuid": "838f7ef4-975f-50ea-bbcf-6920db87c37e", 01:23:05.425 "is_configured": true, 01:23:05.425 "data_offset": 2048, 01:23:05.425 "data_size": 63488 01:23:05.425 }, 01:23:05.425 { 01:23:05.425 "name": "BaseBdev2", 01:23:05.425 "uuid": "4be9a02b-3c0b-53c4-80ce-c4653869f191", 01:23:05.425 "is_configured": true, 01:23:05.425 "data_offset": 2048, 01:23:05.425 "data_size": 63488 01:23:05.425 }, 01:23:05.425 { 01:23:05.425 "name": "BaseBdev3", 01:23:05.425 "uuid": "cdacd12b-1097-5fea-99bc-d76bb2a8b3df", 01:23:05.425 "is_configured": true, 01:23:05.425 "data_offset": 2048, 01:23:05.425 "data_size": 63488 01:23:05.425 } 01:23:05.425 ] 01:23:05.425 }' 01:23:05.425 05:17:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:23:05.425 05:17:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 01:23:06.043 05:17:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 01:23:06.043 05:17:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:06.043 05:17:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 01:23:06.043 [2024-12-09 05:17:57.469688] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 01:23:06.043 [2024-12-09 05:17:57.469735] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 01:23:06.043 [2024-12-09 05:17:57.473210] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 01:23:06.043 [2024-12-09 05:17:57.473287] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:23:06.043 [2024-12-09 05:17:57.473342] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 01:23:06.043 [2024-12-09 05:17:57.473358] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 01:23:06.043 { 01:23:06.043 "results": [ 01:23:06.043 { 01:23:06.043 "job": "raid_bdev1", 01:23:06.043 "core_mask": "0x1", 01:23:06.043 "workload": "randrw", 01:23:06.043 "percentage": 50, 01:23:06.043 "status": "finished", 01:23:06.043 "queue_depth": 1, 01:23:06.043 "io_size": 131072, 01:23:06.043 "runtime": 1.403453, 01:23:06.043 "iops": 10078.71300285795, 01:23:06.043 "mibps": 1259.8391253572438, 01:23:06.043 "io_failed": 1, 01:23:06.043 "io_timeout": 0, 01:23:06.043 "avg_latency_us": 138.66337082117656, 01:23:06.043 "min_latency_us": 39.79636363636364, 01:23:06.043 "max_latency_us": 1861.8181818181818 01:23:06.043 } 01:23:06.043 ], 01:23:06.043 "core_count": 1 01:23:06.043 } 01:23:06.043 05:17:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:06.043 05:17:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65384 01:23:06.043 05:17:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 65384 ']' 01:23:06.043 05:17:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 65384 01:23:06.043 05:17:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 01:23:06.043 05:17:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:23:06.043 05:17:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65384 01:23:06.043 05:17:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:23:06.043 05:17:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:23:06.043 killing process with pid 65384 01:23:06.043 05:17:57 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65384' 01:23:06.043 05:17:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 65384 01:23:06.043 [2024-12-09 05:17:57.514262] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 01:23:06.043 05:17:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 65384 01:23:06.300 [2024-12-09 05:17:57.716587] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 01:23:07.674 05:17:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 01:23:07.674 05:17:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.VNx7BAm2TM 01:23:07.674 05:17:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 01:23:07.674 05:17:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 01:23:07.674 05:17:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 01:23:07.674 05:17:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 01:23:07.674 05:17:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 01:23:07.674 05:17:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 01:23:07.674 01:23:07.674 real 0m4.834s 01:23:07.674 user 0m5.831s 01:23:07.674 sys 0m0.748s 01:23:07.674 05:17:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 01:23:07.674 05:17:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 01:23:07.674 ************************************ 01:23:07.674 END TEST raid_write_error_test 01:23:07.674 ************************************ 01:23:07.674 05:17:58 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 01:23:07.674 05:17:58 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test concat 3 false 01:23:07.674 05:17:58 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 01:23:07.674 05:17:58 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 01:23:07.674 05:17:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 01:23:07.674 ************************************ 01:23:07.674 START TEST raid_state_function_test 01:23:07.674 ************************************ 01:23:07.674 05:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 false 01:23:07.674 05:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 01:23:07.674 05:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 01:23:07.674 05:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 01:23:07.674 05:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 01:23:07.675 05:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 01:23:07.675 05:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 01:23:07.675 05:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 01:23:07.675 05:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 01:23:07.675 05:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 01:23:07.675 05:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 01:23:07.675 05:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 01:23:07.675 05:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 01:23:07.675 05:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 01:23:07.675 05:17:58 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 01:23:07.675 05:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 01:23:07.675 05:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 01:23:07.675 05:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 01:23:07.675 05:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 01:23:07.675 05:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 01:23:07.675 05:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 01:23:07.675 05:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 01:23:07.675 05:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 01:23:07.675 05:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 01:23:07.675 05:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 01:23:07.675 05:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 01:23:07.675 05:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 01:23:07.675 05:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=65523 01:23:07.675 Process raid pid: 65523 01:23:07.675 05:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 65523' 01:23:07.675 05:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 01:23:07.675 05:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 65523 01:23:07.675 05:17:58 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 65523 ']' 01:23:07.675 05:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:23:07.675 05:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 01:23:07.675 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:23:07.675 05:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:23:07.675 05:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 01:23:07.675 05:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:23:07.675 [2024-12-09 05:17:59.035747] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:23:07.675 [2024-12-09 05:17:59.035954] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:23:07.675 [2024-12-09 05:17:59.212655] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:23:07.933 [2024-12-09 05:17:59.348242] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:23:08.192 [2024-12-09 05:17:59.563944] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 01:23:08.192 [2024-12-09 05:17:59.563997] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 01:23:08.450 05:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:23:08.450 05:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 01:23:08.450 05:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 01:23:08.450 05:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:08.450 05:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:23:08.450 [2024-12-09 05:18:00.060757] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 01:23:08.450 [2024-12-09 05:18:00.060838] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 01:23:08.450 [2024-12-09 05:18:00.060856] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 01:23:08.450 [2024-12-09 05:18:00.060872] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 01:23:08.450 [2024-12-09 05:18:00.060883] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 01:23:08.450 [2024-12-09 05:18:00.060897] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 01:23:08.450 05:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:08.450 05:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 01:23:08.709 05:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:23:08.709 05:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:23:08.709 05:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 01:23:08.709 05:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:23:08.709 05:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 01:23:08.709 05:18:00 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:23:08.709 05:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:23:08.709 05:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:23:08.709 05:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:23:08.709 05:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:23:08.709 05:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:08.709 05:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:23:08.709 05:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:23:08.709 05:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:08.709 05:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:23:08.709 "name": "Existed_Raid", 01:23:08.709 "uuid": "00000000-0000-0000-0000-000000000000", 01:23:08.709 "strip_size_kb": 64, 01:23:08.709 "state": "configuring", 01:23:08.709 "raid_level": "concat", 01:23:08.709 "superblock": false, 01:23:08.709 "num_base_bdevs": 3, 01:23:08.709 "num_base_bdevs_discovered": 0, 01:23:08.709 "num_base_bdevs_operational": 3, 01:23:08.709 "base_bdevs_list": [ 01:23:08.710 { 01:23:08.710 "name": "BaseBdev1", 01:23:08.710 "uuid": "00000000-0000-0000-0000-000000000000", 01:23:08.710 "is_configured": false, 01:23:08.710 "data_offset": 0, 01:23:08.710 "data_size": 0 01:23:08.710 }, 01:23:08.710 { 01:23:08.710 "name": "BaseBdev2", 01:23:08.710 "uuid": "00000000-0000-0000-0000-000000000000", 01:23:08.710 "is_configured": false, 01:23:08.710 "data_offset": 0, 01:23:08.710 "data_size": 0 01:23:08.710 }, 01:23:08.710 { 01:23:08.710 "name": "BaseBdev3", 01:23:08.710 "uuid": 
"00000000-0000-0000-0000-000000000000", 01:23:08.710 "is_configured": false, 01:23:08.710 "data_offset": 0, 01:23:08.710 "data_size": 0 01:23:08.710 } 01:23:08.710 ] 01:23:08.710 }' 01:23:08.710 05:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:23:08.710 05:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:23:09.278 05:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 01:23:09.278 05:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:09.278 05:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:23:09.278 [2024-12-09 05:18:00.596854] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 01:23:09.278 [2024-12-09 05:18:00.596919] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 01:23:09.278 05:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:09.278 05:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 01:23:09.278 05:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:09.278 05:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:23:09.278 [2024-12-09 05:18:00.604828] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 01:23:09.278 [2024-12-09 05:18:00.604885] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 01:23:09.278 [2024-12-09 05:18:00.604901] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 01:23:09.278 [2024-12-09 05:18:00.604917] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev2 doesn't exist now 01:23:09.278 [2024-12-09 05:18:00.604927] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 01:23:09.278 [2024-12-09 05:18:00.604942] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 01:23:09.278 05:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:09.278 05:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 01:23:09.278 05:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:09.278 05:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:23:09.278 [2024-12-09 05:18:00.650257] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 01:23:09.278 BaseBdev1 01:23:09.278 05:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:09.278 05:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 01:23:09.278 05:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 01:23:09.278 05:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 01:23:09.278 05:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 01:23:09.278 05:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 01:23:09.278 05:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 01:23:09.278 05:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 01:23:09.278 05:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:09.278 05:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 01:23:09.278 05:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:09.278 05:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 01:23:09.278 05:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:09.278 05:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:23:09.278 [ 01:23:09.278 { 01:23:09.278 "name": "BaseBdev1", 01:23:09.278 "aliases": [ 01:23:09.278 "671aaced-7110-435b-ad13-5f4c28ba65d2" 01:23:09.278 ], 01:23:09.278 "product_name": "Malloc disk", 01:23:09.278 "block_size": 512, 01:23:09.278 "num_blocks": 65536, 01:23:09.278 "uuid": "671aaced-7110-435b-ad13-5f4c28ba65d2", 01:23:09.278 "assigned_rate_limits": { 01:23:09.278 "rw_ios_per_sec": 0, 01:23:09.278 "rw_mbytes_per_sec": 0, 01:23:09.278 "r_mbytes_per_sec": 0, 01:23:09.278 "w_mbytes_per_sec": 0 01:23:09.278 }, 01:23:09.278 "claimed": true, 01:23:09.278 "claim_type": "exclusive_write", 01:23:09.278 "zoned": false, 01:23:09.278 "supported_io_types": { 01:23:09.278 "read": true, 01:23:09.278 "write": true, 01:23:09.278 "unmap": true, 01:23:09.278 "flush": true, 01:23:09.278 "reset": true, 01:23:09.278 "nvme_admin": false, 01:23:09.278 "nvme_io": false, 01:23:09.278 "nvme_io_md": false, 01:23:09.278 "write_zeroes": true, 01:23:09.278 "zcopy": true, 01:23:09.278 "get_zone_info": false, 01:23:09.278 "zone_management": false, 01:23:09.278 "zone_append": false, 01:23:09.278 "compare": false, 01:23:09.278 "compare_and_write": false, 01:23:09.278 "abort": true, 01:23:09.278 "seek_hole": false, 01:23:09.278 "seek_data": false, 01:23:09.278 "copy": true, 01:23:09.278 "nvme_iov_md": false 01:23:09.278 }, 01:23:09.278 "memory_domains": [ 01:23:09.278 { 01:23:09.278 "dma_device_id": "system", 01:23:09.278 "dma_device_type": 1 01:23:09.278 }, 01:23:09.278 { 01:23:09.278 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
01:23:09.278 "dma_device_type": 2 01:23:09.278 } 01:23:09.278 ], 01:23:09.278 "driver_specific": {} 01:23:09.278 } 01:23:09.278 ] 01:23:09.278 05:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:09.278 05:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 01:23:09.278 05:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 01:23:09.278 05:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:23:09.278 05:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:23:09.278 05:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 01:23:09.278 05:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:23:09.278 05:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 01:23:09.278 05:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:23:09.278 05:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:23:09.278 05:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:23:09.278 05:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:23:09.278 05:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:23:09.278 05:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:23:09.278 05:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:09.278 05:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:23:09.278 05:18:00 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:09.278 05:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:23:09.278 "name": "Existed_Raid", 01:23:09.278 "uuid": "00000000-0000-0000-0000-000000000000", 01:23:09.278 "strip_size_kb": 64, 01:23:09.278 "state": "configuring", 01:23:09.278 "raid_level": "concat", 01:23:09.278 "superblock": false, 01:23:09.278 "num_base_bdevs": 3, 01:23:09.278 "num_base_bdevs_discovered": 1, 01:23:09.278 "num_base_bdevs_operational": 3, 01:23:09.278 "base_bdevs_list": [ 01:23:09.278 { 01:23:09.278 "name": "BaseBdev1", 01:23:09.278 "uuid": "671aaced-7110-435b-ad13-5f4c28ba65d2", 01:23:09.278 "is_configured": true, 01:23:09.278 "data_offset": 0, 01:23:09.278 "data_size": 65536 01:23:09.278 }, 01:23:09.278 { 01:23:09.278 "name": "BaseBdev2", 01:23:09.278 "uuid": "00000000-0000-0000-0000-000000000000", 01:23:09.278 "is_configured": false, 01:23:09.278 "data_offset": 0, 01:23:09.278 "data_size": 0 01:23:09.278 }, 01:23:09.278 { 01:23:09.278 "name": "BaseBdev3", 01:23:09.278 "uuid": "00000000-0000-0000-0000-000000000000", 01:23:09.278 "is_configured": false, 01:23:09.278 "data_offset": 0, 01:23:09.278 "data_size": 0 01:23:09.278 } 01:23:09.278 ] 01:23:09.278 }' 01:23:09.278 05:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:23:09.278 05:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:23:09.845 05:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 01:23:09.845 05:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:09.845 05:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:23:09.845 [2024-12-09 05:18:01.194495] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 01:23:09.845 [2024-12-09 05:18:01.194569] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 01:23:09.845 05:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:09.845 05:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 01:23:09.845 05:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:09.845 05:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:23:09.845 [2024-12-09 05:18:01.202517] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 01:23:09.845 [2024-12-09 05:18:01.204986] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 01:23:09.845 [2024-12-09 05:18:01.205039] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 01:23:09.845 [2024-12-09 05:18:01.205088] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 01:23:09.845 [2024-12-09 05:18:01.205103] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 01:23:09.845 05:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:09.846 05:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 01:23:09.846 05:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 01:23:09.846 05:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 01:23:09.846 05:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:23:09.846 05:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:23:09.846 05:18:01 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 01:23:09.846 05:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:23:09.846 05:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 01:23:09.846 05:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:23:09.846 05:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:23:09.846 05:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:23:09.846 05:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:23:09.846 05:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:23:09.846 05:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:09.846 05:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:23:09.846 05:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:23:09.846 05:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:09.846 05:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:23:09.846 "name": "Existed_Raid", 01:23:09.846 "uuid": "00000000-0000-0000-0000-000000000000", 01:23:09.846 "strip_size_kb": 64, 01:23:09.846 "state": "configuring", 01:23:09.846 "raid_level": "concat", 01:23:09.846 "superblock": false, 01:23:09.846 "num_base_bdevs": 3, 01:23:09.846 "num_base_bdevs_discovered": 1, 01:23:09.846 "num_base_bdevs_operational": 3, 01:23:09.846 "base_bdevs_list": [ 01:23:09.846 { 01:23:09.846 "name": "BaseBdev1", 01:23:09.846 "uuid": "671aaced-7110-435b-ad13-5f4c28ba65d2", 01:23:09.846 "is_configured": true, 01:23:09.846 "data_offset": 
0, 01:23:09.846 "data_size": 65536 01:23:09.846 }, 01:23:09.846 { 01:23:09.846 "name": "BaseBdev2", 01:23:09.846 "uuid": "00000000-0000-0000-0000-000000000000", 01:23:09.846 "is_configured": false, 01:23:09.846 "data_offset": 0, 01:23:09.846 "data_size": 0 01:23:09.846 }, 01:23:09.846 { 01:23:09.846 "name": "BaseBdev3", 01:23:09.846 "uuid": "00000000-0000-0000-0000-000000000000", 01:23:09.846 "is_configured": false, 01:23:09.846 "data_offset": 0, 01:23:09.846 "data_size": 0 01:23:09.846 } 01:23:09.846 ] 01:23:09.846 }' 01:23:09.846 05:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:23:09.846 05:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:23:10.420 05:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 01:23:10.420 05:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:10.420 05:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:23:10.420 [2024-12-09 05:18:01.765400] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 01:23:10.420 BaseBdev2 01:23:10.420 05:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:10.420 05:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 01:23:10.420 05:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 01:23:10.420 05:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 01:23:10.420 05:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 01:23:10.420 05:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 01:23:10.420 05:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
01:23:10.420 05:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 01:23:10.420 05:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:10.420 05:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:23:10.420 05:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:10.420 05:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 01:23:10.420 05:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:10.420 05:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:23:10.420 [ 01:23:10.420 { 01:23:10.420 "name": "BaseBdev2", 01:23:10.420 "aliases": [ 01:23:10.420 "454ec810-57ff-4fb8-bd31-2266c30087c2" 01:23:10.420 ], 01:23:10.420 "product_name": "Malloc disk", 01:23:10.420 "block_size": 512, 01:23:10.420 "num_blocks": 65536, 01:23:10.420 "uuid": "454ec810-57ff-4fb8-bd31-2266c30087c2", 01:23:10.420 "assigned_rate_limits": { 01:23:10.420 "rw_ios_per_sec": 0, 01:23:10.420 "rw_mbytes_per_sec": 0, 01:23:10.420 "r_mbytes_per_sec": 0, 01:23:10.420 "w_mbytes_per_sec": 0 01:23:10.420 }, 01:23:10.420 "claimed": true, 01:23:10.420 "claim_type": "exclusive_write", 01:23:10.420 "zoned": false, 01:23:10.420 "supported_io_types": { 01:23:10.420 "read": true, 01:23:10.420 "write": true, 01:23:10.420 "unmap": true, 01:23:10.420 "flush": true, 01:23:10.420 "reset": true, 01:23:10.420 "nvme_admin": false, 01:23:10.420 "nvme_io": false, 01:23:10.420 "nvme_io_md": false, 01:23:10.420 "write_zeroes": true, 01:23:10.420 "zcopy": true, 01:23:10.420 "get_zone_info": false, 01:23:10.420 "zone_management": false, 01:23:10.420 "zone_append": false, 01:23:10.420 "compare": false, 01:23:10.420 "compare_and_write": false, 01:23:10.420 "abort": true, 01:23:10.420 "seek_hole": 
false, 01:23:10.420 "seek_data": false, 01:23:10.420 "copy": true, 01:23:10.420 "nvme_iov_md": false 01:23:10.420 }, 01:23:10.420 "memory_domains": [ 01:23:10.420 { 01:23:10.420 "dma_device_id": "system", 01:23:10.420 "dma_device_type": 1 01:23:10.420 }, 01:23:10.420 { 01:23:10.420 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:23:10.420 "dma_device_type": 2 01:23:10.420 } 01:23:10.420 ], 01:23:10.420 "driver_specific": {} 01:23:10.420 } 01:23:10.420 ] 01:23:10.420 05:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:10.420 05:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 01:23:10.420 05:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 01:23:10.420 05:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 01:23:10.420 05:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 01:23:10.420 05:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:23:10.420 05:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:23:10.420 05:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 01:23:10.420 05:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:23:10.420 05:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 01:23:10.420 05:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:23:10.420 05:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:23:10.420 05:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:23:10.420 05:18:01 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 01:23:10.420 05:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:23:10.420 05:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:10.420 05:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:23:10.420 05:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:23:10.420 05:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:10.420 05:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:23:10.420 "name": "Existed_Raid", 01:23:10.420 "uuid": "00000000-0000-0000-0000-000000000000", 01:23:10.420 "strip_size_kb": 64, 01:23:10.420 "state": "configuring", 01:23:10.420 "raid_level": "concat", 01:23:10.420 "superblock": false, 01:23:10.420 "num_base_bdevs": 3, 01:23:10.420 "num_base_bdevs_discovered": 2, 01:23:10.421 "num_base_bdevs_operational": 3, 01:23:10.421 "base_bdevs_list": [ 01:23:10.421 { 01:23:10.421 "name": "BaseBdev1", 01:23:10.421 "uuid": "671aaced-7110-435b-ad13-5f4c28ba65d2", 01:23:10.421 "is_configured": true, 01:23:10.421 "data_offset": 0, 01:23:10.421 "data_size": 65536 01:23:10.421 }, 01:23:10.421 { 01:23:10.421 "name": "BaseBdev2", 01:23:10.421 "uuid": "454ec810-57ff-4fb8-bd31-2266c30087c2", 01:23:10.421 "is_configured": true, 01:23:10.421 "data_offset": 0, 01:23:10.421 "data_size": 65536 01:23:10.421 }, 01:23:10.421 { 01:23:10.421 "name": "BaseBdev3", 01:23:10.421 "uuid": "00000000-0000-0000-0000-000000000000", 01:23:10.421 "is_configured": false, 01:23:10.421 "data_offset": 0, 01:23:10.421 "data_size": 0 01:23:10.421 } 01:23:10.421 ] 01:23:10.421 }' 01:23:10.421 05:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:23:10.421 05:18:01 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 01:23:10.996 05:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 01:23:10.996 05:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:10.996 05:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:23:10.996 [2024-12-09 05:18:02.361548] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 01:23:10.996 [2024-12-09 05:18:02.361614] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 01:23:10.996 [2024-12-09 05:18:02.361635] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 01:23:10.996 [2024-12-09 05:18:02.361999] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 01:23:10.996 [2024-12-09 05:18:02.362231] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 01:23:10.996 [2024-12-09 05:18:02.362260] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 01:23:10.996 [2024-12-09 05:18:02.362620] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:23:10.996 BaseBdev3 01:23:10.996 05:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:10.996 05:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 01:23:10.996 05:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 01:23:10.996 05:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 01:23:10.996 05:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 01:23:10.996 05:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 01:23:10.996 05:18:02 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 01:23:10.996 05:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 01:23:10.996 05:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:10.996 05:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:23:10.996 05:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:10.996 05:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 01:23:10.996 05:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:10.996 05:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:23:10.996 [ 01:23:10.996 { 01:23:10.996 "name": "BaseBdev3", 01:23:10.996 "aliases": [ 01:23:10.996 "97c3743c-fec4-41d9-bcc9-4b6a22932fe6" 01:23:10.996 ], 01:23:10.996 "product_name": "Malloc disk", 01:23:10.996 "block_size": 512, 01:23:10.996 "num_blocks": 65536, 01:23:10.996 "uuid": "97c3743c-fec4-41d9-bcc9-4b6a22932fe6", 01:23:10.996 "assigned_rate_limits": { 01:23:10.996 "rw_ios_per_sec": 0, 01:23:10.996 "rw_mbytes_per_sec": 0, 01:23:10.996 "r_mbytes_per_sec": 0, 01:23:10.996 "w_mbytes_per_sec": 0 01:23:10.996 }, 01:23:10.996 "claimed": true, 01:23:10.996 "claim_type": "exclusive_write", 01:23:10.996 "zoned": false, 01:23:10.996 "supported_io_types": { 01:23:10.996 "read": true, 01:23:10.996 "write": true, 01:23:10.996 "unmap": true, 01:23:10.996 "flush": true, 01:23:10.996 "reset": true, 01:23:10.996 "nvme_admin": false, 01:23:10.996 "nvme_io": false, 01:23:10.996 "nvme_io_md": false, 01:23:10.996 "write_zeroes": true, 01:23:10.996 "zcopy": true, 01:23:10.996 "get_zone_info": false, 01:23:10.996 "zone_management": false, 01:23:10.996 "zone_append": false, 01:23:10.996 "compare": false, 
01:23:10.996 "compare_and_write": false, 01:23:10.996 "abort": true, 01:23:10.996 "seek_hole": false, 01:23:10.996 "seek_data": false, 01:23:10.996 "copy": true, 01:23:10.996 "nvme_iov_md": false 01:23:10.996 }, 01:23:10.996 "memory_domains": [ 01:23:10.996 { 01:23:10.996 "dma_device_id": "system", 01:23:10.996 "dma_device_type": 1 01:23:10.996 }, 01:23:10.996 { 01:23:10.996 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:23:10.996 "dma_device_type": 2 01:23:10.996 } 01:23:10.996 ], 01:23:10.996 "driver_specific": {} 01:23:10.996 } 01:23:10.996 ] 01:23:10.996 05:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:10.996 05:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 01:23:10.996 05:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 01:23:10.996 05:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 01:23:10.996 05:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 01:23:10.996 05:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:23:10.996 05:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:23:10.996 05:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 01:23:10.996 05:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:23:10.996 05:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 01:23:10.997 05:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:23:10.997 05:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:23:10.997 05:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 01:23:10.997 05:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:23:10.997 05:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:23:10.997 05:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:23:10.997 05:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:10.997 05:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:23:10.997 05:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:10.997 05:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:23:10.997 "name": "Existed_Raid", 01:23:10.997 "uuid": "e4867b0e-f2a7-4736-be24-9f9bdcb62170", 01:23:10.997 "strip_size_kb": 64, 01:23:10.997 "state": "online", 01:23:10.997 "raid_level": "concat", 01:23:10.997 "superblock": false, 01:23:10.997 "num_base_bdevs": 3, 01:23:10.997 "num_base_bdevs_discovered": 3, 01:23:10.997 "num_base_bdevs_operational": 3, 01:23:10.997 "base_bdevs_list": [ 01:23:10.997 { 01:23:10.997 "name": "BaseBdev1", 01:23:10.997 "uuid": "671aaced-7110-435b-ad13-5f4c28ba65d2", 01:23:10.997 "is_configured": true, 01:23:10.997 "data_offset": 0, 01:23:10.997 "data_size": 65536 01:23:10.997 }, 01:23:10.997 { 01:23:10.997 "name": "BaseBdev2", 01:23:10.997 "uuid": "454ec810-57ff-4fb8-bd31-2266c30087c2", 01:23:10.997 "is_configured": true, 01:23:10.997 "data_offset": 0, 01:23:10.997 "data_size": 65536 01:23:10.997 }, 01:23:10.997 { 01:23:10.997 "name": "BaseBdev3", 01:23:10.997 "uuid": "97c3743c-fec4-41d9-bcc9-4b6a22932fe6", 01:23:10.997 "is_configured": true, 01:23:10.997 "data_offset": 0, 01:23:10.997 "data_size": 65536 01:23:10.997 } 01:23:10.997 ] 01:23:10.997 }' 01:23:10.997 05:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
01:23:10.997 05:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:23:11.563 05:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 01:23:11.563 05:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 01:23:11.563 05:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 01:23:11.563 05:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 01:23:11.563 05:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 01:23:11.563 05:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 01:23:11.563 05:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 01:23:11.563 05:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 01:23:11.563 05:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:11.563 05:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:23:11.563 [2024-12-09 05:18:02.922131] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 01:23:11.563 05:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:11.563 05:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 01:23:11.563 "name": "Existed_Raid", 01:23:11.563 "aliases": [ 01:23:11.563 "e4867b0e-f2a7-4736-be24-9f9bdcb62170" 01:23:11.563 ], 01:23:11.563 "product_name": "Raid Volume", 01:23:11.563 "block_size": 512, 01:23:11.563 "num_blocks": 196608, 01:23:11.563 "uuid": "e4867b0e-f2a7-4736-be24-9f9bdcb62170", 01:23:11.563 "assigned_rate_limits": { 01:23:11.563 "rw_ios_per_sec": 0, 01:23:11.563 "rw_mbytes_per_sec": 0, 01:23:11.563 "r_mbytes_per_sec": 
0, 01:23:11.563 "w_mbytes_per_sec": 0 01:23:11.563 }, 01:23:11.563 "claimed": false, 01:23:11.563 "zoned": false, 01:23:11.563 "supported_io_types": { 01:23:11.563 "read": true, 01:23:11.563 "write": true, 01:23:11.563 "unmap": true, 01:23:11.563 "flush": true, 01:23:11.563 "reset": true, 01:23:11.563 "nvme_admin": false, 01:23:11.563 "nvme_io": false, 01:23:11.563 "nvme_io_md": false, 01:23:11.563 "write_zeroes": true, 01:23:11.563 "zcopy": false, 01:23:11.563 "get_zone_info": false, 01:23:11.563 "zone_management": false, 01:23:11.563 "zone_append": false, 01:23:11.563 "compare": false, 01:23:11.563 "compare_and_write": false, 01:23:11.563 "abort": false, 01:23:11.563 "seek_hole": false, 01:23:11.563 "seek_data": false, 01:23:11.563 "copy": false, 01:23:11.563 "nvme_iov_md": false 01:23:11.563 }, 01:23:11.563 "memory_domains": [ 01:23:11.563 { 01:23:11.563 "dma_device_id": "system", 01:23:11.563 "dma_device_type": 1 01:23:11.563 }, 01:23:11.563 { 01:23:11.563 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:23:11.563 "dma_device_type": 2 01:23:11.563 }, 01:23:11.563 { 01:23:11.563 "dma_device_id": "system", 01:23:11.563 "dma_device_type": 1 01:23:11.563 }, 01:23:11.563 { 01:23:11.563 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:23:11.563 "dma_device_type": 2 01:23:11.563 }, 01:23:11.563 { 01:23:11.563 "dma_device_id": "system", 01:23:11.563 "dma_device_type": 1 01:23:11.563 }, 01:23:11.563 { 01:23:11.563 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:23:11.563 "dma_device_type": 2 01:23:11.563 } 01:23:11.563 ], 01:23:11.563 "driver_specific": { 01:23:11.563 "raid": { 01:23:11.563 "uuid": "e4867b0e-f2a7-4736-be24-9f9bdcb62170", 01:23:11.563 "strip_size_kb": 64, 01:23:11.563 "state": "online", 01:23:11.563 "raid_level": "concat", 01:23:11.563 "superblock": false, 01:23:11.563 "num_base_bdevs": 3, 01:23:11.564 "num_base_bdevs_discovered": 3, 01:23:11.564 "num_base_bdevs_operational": 3, 01:23:11.564 "base_bdevs_list": [ 01:23:11.564 { 01:23:11.564 "name": "BaseBdev1", 
01:23:11.564 "uuid": "671aaced-7110-435b-ad13-5f4c28ba65d2", 01:23:11.564 "is_configured": true, 01:23:11.564 "data_offset": 0, 01:23:11.564 "data_size": 65536 01:23:11.564 }, 01:23:11.564 { 01:23:11.564 "name": "BaseBdev2", 01:23:11.564 "uuid": "454ec810-57ff-4fb8-bd31-2266c30087c2", 01:23:11.564 "is_configured": true, 01:23:11.564 "data_offset": 0, 01:23:11.564 "data_size": 65536 01:23:11.564 }, 01:23:11.564 { 01:23:11.564 "name": "BaseBdev3", 01:23:11.564 "uuid": "97c3743c-fec4-41d9-bcc9-4b6a22932fe6", 01:23:11.564 "is_configured": true, 01:23:11.564 "data_offset": 0, 01:23:11.564 "data_size": 65536 01:23:11.564 } 01:23:11.564 ] 01:23:11.564 } 01:23:11.564 } 01:23:11.564 }' 01:23:11.564 05:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 01:23:11.564 05:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 01:23:11.564 BaseBdev2 01:23:11.564 BaseBdev3' 01:23:11.564 05:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:23:11.564 05:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 01:23:11.564 05:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:23:11.564 05:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 01:23:11.564 05:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:23:11.564 05:18:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:11.564 05:18:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:23:11.564 05:18:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 01:23:11.564 05:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:23:11.564 05:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:23:11.564 05:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:23:11.564 05:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:23:11.564 05:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 01:23:11.564 05:18:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:11.564 05:18:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:23:11.564 05:18:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:11.822 05:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:23:11.822 05:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:23:11.822 05:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:23:11.822 05:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 01:23:11.822 05:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:23:11.822 05:18:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:11.822 05:18:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:23:11.822 05:18:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:11.822 05:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
01:23:11.822 05:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:23:11.822 05:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 01:23:11.822 05:18:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:11.823 05:18:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:23:11.823 [2024-12-09 05:18:03.241893] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 01:23:11.823 [2024-12-09 05:18:03.241927] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 01:23:11.823 [2024-12-09 05:18:03.242036] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 01:23:11.823 05:18:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:11.823 05:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 01:23:11.823 05:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 01:23:11.823 05:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 01:23:11.823 05:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 01:23:11.823 05:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 01:23:11.823 05:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 01:23:11.823 05:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:23:11.823 05:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 01:23:11.823 05:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 01:23:11.823 05:18:03 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 01:23:11.823 05:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 01:23:11.823 05:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:23:11.823 05:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:23:11.823 05:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:23:11.823 05:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:23:11.823 05:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:23:11.823 05:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:23:11.823 05:18:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:11.823 05:18:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:23:11.823 05:18:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:11.823 05:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:23:11.823 "name": "Existed_Raid", 01:23:11.823 "uuid": "e4867b0e-f2a7-4736-be24-9f9bdcb62170", 01:23:11.823 "strip_size_kb": 64, 01:23:11.823 "state": "offline", 01:23:11.823 "raid_level": "concat", 01:23:11.823 "superblock": false, 01:23:11.823 "num_base_bdevs": 3, 01:23:11.823 "num_base_bdevs_discovered": 2, 01:23:11.823 "num_base_bdevs_operational": 2, 01:23:11.823 "base_bdevs_list": [ 01:23:11.823 { 01:23:11.823 "name": null, 01:23:11.823 "uuid": "00000000-0000-0000-0000-000000000000", 01:23:11.823 "is_configured": false, 01:23:11.823 "data_offset": 0, 01:23:11.823 "data_size": 65536 01:23:11.823 }, 01:23:11.823 { 01:23:11.823 "name": "BaseBdev2", 01:23:11.823 "uuid": 
"454ec810-57ff-4fb8-bd31-2266c30087c2", 01:23:11.823 "is_configured": true, 01:23:11.823 "data_offset": 0, 01:23:11.823 "data_size": 65536 01:23:11.823 }, 01:23:11.823 { 01:23:11.823 "name": "BaseBdev3", 01:23:11.823 "uuid": "97c3743c-fec4-41d9-bcc9-4b6a22932fe6", 01:23:11.823 "is_configured": true, 01:23:11.823 "data_offset": 0, 01:23:11.823 "data_size": 65536 01:23:11.823 } 01:23:11.823 ] 01:23:11.823 }' 01:23:11.823 05:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:23:11.823 05:18:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:23:12.391 05:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 01:23:12.391 05:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 01:23:12.391 05:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 01:23:12.391 05:18:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:12.391 05:18:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:23:12.391 05:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 01:23:12.391 05:18:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:12.391 05:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 01:23:12.391 05:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 01:23:12.391 05:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 01:23:12.391 05:18:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:12.391 05:18:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:23:12.391 [2024-12-09 05:18:03.912330] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 01:23:12.391 05:18:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:12.391 05:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 01:23:12.391 05:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 01:23:12.650 05:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 01:23:12.650 05:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 01:23:12.650 05:18:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:12.650 05:18:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:23:12.650 05:18:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:12.650 05:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 01:23:12.650 05:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 01:23:12.650 05:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 01:23:12.650 05:18:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:12.650 05:18:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:23:12.650 [2024-12-09 05:18:04.061928] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 01:23:12.650 [2024-12-09 05:18:04.061996] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 01:23:12.650 05:18:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:12.650 05:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 01:23:12.650 05:18:04 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 01:23:12.650 05:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 01:23:12.650 05:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 01:23:12.650 05:18:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:12.650 05:18:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:23:12.650 05:18:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:12.650 05:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 01:23:12.650 05:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 01:23:12.650 05:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 01:23:12.650 05:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 01:23:12.650 05:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 01:23:12.650 05:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 01:23:12.650 05:18:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:12.650 05:18:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:23:12.650 BaseBdev2 01:23:12.650 05:18:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:12.650 05:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 01:23:12.650 05:18:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 01:23:12.650 05:18:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 01:23:12.650 
05:18:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 01:23:12.650 05:18:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 01:23:12.650 05:18:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 01:23:12.650 05:18:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 01:23:12.650 05:18:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:12.650 05:18:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:23:12.650 05:18:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:12.650 05:18:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 01:23:12.650 05:18:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:12.650 05:18:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:23:12.909 [ 01:23:12.909 { 01:23:12.909 "name": "BaseBdev2", 01:23:12.909 "aliases": [ 01:23:12.909 "39f99f65-c132-465f-bd24-55ebe59298a0" 01:23:12.909 ], 01:23:12.909 "product_name": "Malloc disk", 01:23:12.909 "block_size": 512, 01:23:12.909 "num_blocks": 65536, 01:23:12.909 "uuid": "39f99f65-c132-465f-bd24-55ebe59298a0", 01:23:12.909 "assigned_rate_limits": { 01:23:12.909 "rw_ios_per_sec": 0, 01:23:12.909 "rw_mbytes_per_sec": 0, 01:23:12.909 "r_mbytes_per_sec": 0, 01:23:12.909 "w_mbytes_per_sec": 0 01:23:12.909 }, 01:23:12.909 "claimed": false, 01:23:12.909 "zoned": false, 01:23:12.909 "supported_io_types": { 01:23:12.909 "read": true, 01:23:12.909 "write": true, 01:23:12.909 "unmap": true, 01:23:12.909 "flush": true, 01:23:12.909 "reset": true, 01:23:12.909 "nvme_admin": false, 01:23:12.909 "nvme_io": false, 01:23:12.909 "nvme_io_md": false, 01:23:12.909 "write_zeroes": true, 
01:23:12.909 "zcopy": true, 01:23:12.909 "get_zone_info": false, 01:23:12.909 "zone_management": false, 01:23:12.909 "zone_append": false, 01:23:12.909 "compare": false, 01:23:12.909 "compare_and_write": false, 01:23:12.909 "abort": true, 01:23:12.909 "seek_hole": false, 01:23:12.909 "seek_data": false, 01:23:12.909 "copy": true, 01:23:12.909 "nvme_iov_md": false 01:23:12.909 }, 01:23:12.909 "memory_domains": [ 01:23:12.909 { 01:23:12.909 "dma_device_id": "system", 01:23:12.909 "dma_device_type": 1 01:23:12.909 }, 01:23:12.909 { 01:23:12.909 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:23:12.909 "dma_device_type": 2 01:23:12.909 } 01:23:12.909 ], 01:23:12.909 "driver_specific": {} 01:23:12.909 } 01:23:12.909 ] 01:23:12.909 05:18:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:12.909 05:18:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 01:23:12.909 05:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 01:23:12.909 05:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 01:23:12.909 05:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 01:23:12.909 05:18:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:12.909 05:18:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:23:12.909 BaseBdev3 01:23:12.909 05:18:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:12.909 05:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 01:23:12.909 05:18:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 01:23:12.909 05:18:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 01:23:12.909 05:18:04 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 01:23:12.909 05:18:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 01:23:12.909 05:18:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 01:23:12.909 05:18:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 01:23:12.909 05:18:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:12.909 05:18:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:23:12.909 05:18:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:12.909 05:18:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 01:23:12.909 05:18:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:12.909 05:18:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:23:12.909 [ 01:23:12.909 { 01:23:12.909 "name": "BaseBdev3", 01:23:12.909 "aliases": [ 01:23:12.909 "fe349fd3-03a4-4234-8c4b-cd2909395494" 01:23:12.909 ], 01:23:12.909 "product_name": "Malloc disk", 01:23:12.909 "block_size": 512, 01:23:12.909 "num_blocks": 65536, 01:23:12.909 "uuid": "fe349fd3-03a4-4234-8c4b-cd2909395494", 01:23:12.909 "assigned_rate_limits": { 01:23:12.909 "rw_ios_per_sec": 0, 01:23:12.909 "rw_mbytes_per_sec": 0, 01:23:12.909 "r_mbytes_per_sec": 0, 01:23:12.909 "w_mbytes_per_sec": 0 01:23:12.909 }, 01:23:12.909 "claimed": false, 01:23:12.909 "zoned": false, 01:23:12.909 "supported_io_types": { 01:23:12.909 "read": true, 01:23:12.909 "write": true, 01:23:12.909 "unmap": true, 01:23:12.909 "flush": true, 01:23:12.909 "reset": true, 01:23:12.909 "nvme_admin": false, 01:23:12.909 "nvme_io": false, 01:23:12.909 "nvme_io_md": false, 01:23:12.909 "write_zeroes": true, 
01:23:12.909 "zcopy": true, 01:23:12.909 "get_zone_info": false, 01:23:12.909 "zone_management": false, 01:23:12.909 "zone_append": false, 01:23:12.909 "compare": false, 01:23:12.909 "compare_and_write": false, 01:23:12.909 "abort": true, 01:23:12.909 "seek_hole": false, 01:23:12.909 "seek_data": false, 01:23:12.909 "copy": true, 01:23:12.909 "nvme_iov_md": false 01:23:12.909 }, 01:23:12.909 "memory_domains": [ 01:23:12.910 { 01:23:12.910 "dma_device_id": "system", 01:23:12.910 "dma_device_type": 1 01:23:12.910 }, 01:23:12.910 { 01:23:12.910 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:23:12.910 "dma_device_type": 2 01:23:12.910 } 01:23:12.910 ], 01:23:12.910 "driver_specific": {} 01:23:12.910 } 01:23:12.910 ] 01:23:12.910 05:18:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:12.910 05:18:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 01:23:12.910 05:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 01:23:12.910 05:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 01:23:12.910 05:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 01:23:12.910 05:18:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:12.910 05:18:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:23:12.910 [2024-12-09 05:18:04.358286] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 01:23:12.910 [2024-12-09 05:18:04.358481] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 01:23:12.910 [2024-12-09 05:18:04.358617] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 01:23:12.910 [2024-12-09 05:18:04.361055] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 01:23:12.910 05:18:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:12.910 05:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 01:23:12.910 05:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:23:12.910 05:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:23:12.910 05:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 01:23:12.910 05:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:23:12.910 05:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 01:23:12.910 05:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:23:12.910 05:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:23:12.910 05:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:23:12.910 05:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:23:12.910 05:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:23:12.910 05:18:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:12.910 05:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:23:12.910 05:18:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:23:12.910 05:18:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:12.910 05:18:04 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:23:12.910 "name": "Existed_Raid", 01:23:12.910 "uuid": "00000000-0000-0000-0000-000000000000", 01:23:12.910 "strip_size_kb": 64, 01:23:12.910 "state": "configuring", 01:23:12.910 "raid_level": "concat", 01:23:12.910 "superblock": false, 01:23:12.910 "num_base_bdevs": 3, 01:23:12.910 "num_base_bdevs_discovered": 2, 01:23:12.910 "num_base_bdevs_operational": 3, 01:23:12.910 "base_bdevs_list": [ 01:23:12.910 { 01:23:12.910 "name": "BaseBdev1", 01:23:12.910 "uuid": "00000000-0000-0000-0000-000000000000", 01:23:12.910 "is_configured": false, 01:23:12.910 "data_offset": 0, 01:23:12.910 "data_size": 0 01:23:12.910 }, 01:23:12.910 { 01:23:12.910 "name": "BaseBdev2", 01:23:12.910 "uuid": "39f99f65-c132-465f-bd24-55ebe59298a0", 01:23:12.910 "is_configured": true, 01:23:12.910 "data_offset": 0, 01:23:12.910 "data_size": 65536 01:23:12.910 }, 01:23:12.910 { 01:23:12.910 "name": "BaseBdev3", 01:23:12.910 "uuid": "fe349fd3-03a4-4234-8c4b-cd2909395494", 01:23:12.910 "is_configured": true, 01:23:12.910 "data_offset": 0, 01:23:12.910 "data_size": 65536 01:23:12.910 } 01:23:12.910 ] 01:23:12.910 }' 01:23:12.910 05:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:23:12.910 05:18:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:23:13.477 05:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 01:23:13.477 05:18:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:13.477 05:18:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:23:13.477 [2024-12-09 05:18:04.902484] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 01:23:13.477 05:18:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:13.477 05:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 3 01:23:13.477 05:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:23:13.477 05:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:23:13.477 05:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 01:23:13.477 05:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:23:13.477 05:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 01:23:13.477 05:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:23:13.477 05:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:23:13.477 05:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:23:13.477 05:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:23:13.477 05:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:23:13.477 05:18:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:13.477 05:18:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:23:13.477 05:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:23:13.477 05:18:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:13.477 05:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:23:13.477 "name": "Existed_Raid", 01:23:13.477 "uuid": "00000000-0000-0000-0000-000000000000", 01:23:13.477 "strip_size_kb": 64, 01:23:13.477 "state": "configuring", 01:23:13.477 "raid_level": "concat", 01:23:13.477 "superblock": false, 
01:23:13.477 "num_base_bdevs": 3, 01:23:13.477 "num_base_bdevs_discovered": 1, 01:23:13.477 "num_base_bdevs_operational": 3, 01:23:13.477 "base_bdevs_list": [ 01:23:13.477 { 01:23:13.477 "name": "BaseBdev1", 01:23:13.477 "uuid": "00000000-0000-0000-0000-000000000000", 01:23:13.477 "is_configured": false, 01:23:13.477 "data_offset": 0, 01:23:13.477 "data_size": 0 01:23:13.477 }, 01:23:13.477 { 01:23:13.477 "name": null, 01:23:13.477 "uuid": "39f99f65-c132-465f-bd24-55ebe59298a0", 01:23:13.477 "is_configured": false, 01:23:13.477 "data_offset": 0, 01:23:13.477 "data_size": 65536 01:23:13.477 }, 01:23:13.477 { 01:23:13.477 "name": "BaseBdev3", 01:23:13.477 "uuid": "fe349fd3-03a4-4234-8c4b-cd2909395494", 01:23:13.477 "is_configured": true, 01:23:13.477 "data_offset": 0, 01:23:13.477 "data_size": 65536 01:23:13.477 } 01:23:13.477 ] 01:23:13.477 }' 01:23:13.477 05:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:23:13.477 05:18:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:23:14.042 05:18:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 01:23:14.042 05:18:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 01:23:14.042 05:18:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:14.042 05:18:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:23:14.042 05:18:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:14.042 05:18:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 01:23:14.042 05:18:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 01:23:14.042 05:18:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:14.042 
05:18:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:23:14.042 [2024-12-09 05:18:05.531201] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 01:23:14.042 BaseBdev1 01:23:14.042 05:18:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:14.042 05:18:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 01:23:14.042 05:18:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 01:23:14.042 05:18:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 01:23:14.042 05:18:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 01:23:14.043 05:18:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 01:23:14.043 05:18:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 01:23:14.043 05:18:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 01:23:14.043 05:18:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:14.043 05:18:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:23:14.043 05:18:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:14.043 05:18:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 01:23:14.043 05:18:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:14.043 05:18:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:23:14.043 [ 01:23:14.043 { 01:23:14.043 "name": "BaseBdev1", 01:23:14.043 "aliases": [ 01:23:14.043 "2aa03254-9a15-4bc4-8041-ce198e71bd66" 01:23:14.043 ], 01:23:14.043 "product_name": 
"Malloc disk", 01:23:14.043 "block_size": 512, 01:23:14.043 "num_blocks": 65536, 01:23:14.043 "uuid": "2aa03254-9a15-4bc4-8041-ce198e71bd66", 01:23:14.043 "assigned_rate_limits": { 01:23:14.043 "rw_ios_per_sec": 0, 01:23:14.043 "rw_mbytes_per_sec": 0, 01:23:14.043 "r_mbytes_per_sec": 0, 01:23:14.043 "w_mbytes_per_sec": 0 01:23:14.043 }, 01:23:14.043 "claimed": true, 01:23:14.043 "claim_type": "exclusive_write", 01:23:14.043 "zoned": false, 01:23:14.043 "supported_io_types": { 01:23:14.043 "read": true, 01:23:14.043 "write": true, 01:23:14.043 "unmap": true, 01:23:14.043 "flush": true, 01:23:14.043 "reset": true, 01:23:14.043 "nvme_admin": false, 01:23:14.043 "nvme_io": false, 01:23:14.043 "nvme_io_md": false, 01:23:14.043 "write_zeroes": true, 01:23:14.043 "zcopy": true, 01:23:14.043 "get_zone_info": false, 01:23:14.043 "zone_management": false, 01:23:14.043 "zone_append": false, 01:23:14.043 "compare": false, 01:23:14.043 "compare_and_write": false, 01:23:14.043 "abort": true, 01:23:14.043 "seek_hole": false, 01:23:14.043 "seek_data": false, 01:23:14.043 "copy": true, 01:23:14.043 "nvme_iov_md": false 01:23:14.043 }, 01:23:14.043 "memory_domains": [ 01:23:14.043 { 01:23:14.043 "dma_device_id": "system", 01:23:14.043 "dma_device_type": 1 01:23:14.043 }, 01:23:14.043 { 01:23:14.043 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:23:14.043 "dma_device_type": 2 01:23:14.043 } 01:23:14.043 ], 01:23:14.043 "driver_specific": {} 01:23:14.043 } 01:23:14.043 ] 01:23:14.043 05:18:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:14.043 05:18:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 01:23:14.043 05:18:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 01:23:14.043 05:18:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:23:14.043 05:18:05 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:23:14.043 05:18:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 01:23:14.043 05:18:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:23:14.043 05:18:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 01:23:14.043 05:18:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:23:14.043 05:18:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:23:14.043 05:18:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:23:14.043 05:18:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:23:14.043 05:18:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:23:14.043 05:18:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:14.043 05:18:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:23:14.043 05:18:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:23:14.043 05:18:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:14.043 05:18:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:23:14.043 "name": "Existed_Raid", 01:23:14.043 "uuid": "00000000-0000-0000-0000-000000000000", 01:23:14.043 "strip_size_kb": 64, 01:23:14.043 "state": "configuring", 01:23:14.043 "raid_level": "concat", 01:23:14.043 "superblock": false, 01:23:14.043 "num_base_bdevs": 3, 01:23:14.043 "num_base_bdevs_discovered": 2, 01:23:14.043 "num_base_bdevs_operational": 3, 01:23:14.043 "base_bdevs_list": [ 01:23:14.043 { 01:23:14.043 "name": "BaseBdev1", 
01:23:14.043 "uuid": "2aa03254-9a15-4bc4-8041-ce198e71bd66", 01:23:14.043 "is_configured": true, 01:23:14.043 "data_offset": 0, 01:23:14.043 "data_size": 65536 01:23:14.043 }, 01:23:14.043 { 01:23:14.043 "name": null, 01:23:14.043 "uuid": "39f99f65-c132-465f-bd24-55ebe59298a0", 01:23:14.043 "is_configured": false, 01:23:14.043 "data_offset": 0, 01:23:14.043 "data_size": 65536 01:23:14.043 }, 01:23:14.043 { 01:23:14.043 "name": "BaseBdev3", 01:23:14.043 "uuid": "fe349fd3-03a4-4234-8c4b-cd2909395494", 01:23:14.043 "is_configured": true, 01:23:14.043 "data_offset": 0, 01:23:14.043 "data_size": 65536 01:23:14.043 } 01:23:14.043 ] 01:23:14.043 }' 01:23:14.043 05:18:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:23:14.043 05:18:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:23:14.608 05:18:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 01:23:14.608 05:18:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:14.608 05:18:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:23:14.608 05:18:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 01:23:14.608 05:18:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:14.608 05:18:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 01:23:14.608 05:18:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 01:23:14.608 05:18:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:14.608 05:18:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:23:14.608 [2024-12-09 05:18:06.163460] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 01:23:14.608 
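The transcript above keeps invoking `verify_raid_bdev_state Existed_Raid configuring concat 64 3`, which fetches the array's JSON via `rpc_cmd bdev_raid_get_bdevs all`, filters it with `jq -r '.[] | select(.name == "Existed_Raid")'`, and compares fields against the expected values. The real helper is a bash function in `bdev_raid.sh`; the following is only a minimal Python sketch of the same comparison, with the sample JSON abridged from the dump in the log:

```python
import json

# Abridged bdev_raid_get_bdevs output, copied from the log above.
RAID_JSON = """
[{
  "name": "Existed_Raid",
  "strip_size_kb": 64,
  "state": "configuring",
  "raid_level": "concat",
  "num_base_bdevs": 3,
  "num_base_bdevs_discovered": 2,
  "num_base_bdevs_operational": 3
}]
"""

def verify_raid_bdev_state(bdevs, name, expected_state,
                           raid_level, strip_size, num_operational):
    """Mirror of the shell helper: pick the named raid bdev
    (jq: select(.name == ...)) and compare the expected fields."""
    info = next(b for b in bdevs if b["name"] == name)
    assert info["state"] == expected_state, info["state"]
    assert info["raid_level"] == raid_level
    assert info["strip_size_kb"] == strip_size
    assert info["num_base_bdevs_operational"] == num_operational
    return info

info = verify_raid_bdev_state(json.loads(RAID_JSON), "Existed_Raid",
                              "configuring", "concat", 64, 3)
print(info["num_base_bdevs_discovered"])  # 2 of 3 base bdevs present here
```

Note that the helper checks `num_base_bdevs_operational`, not `num_base_bdevs_discovered`: the array stays `configuring` with fewer discovered members, which is exactly what the test exercises next.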
05:18:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:14.608 05:18:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 01:23:14.608 05:18:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:23:14.608 05:18:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:23:14.608 05:18:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 01:23:14.608 05:18:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:23:14.608 05:18:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 01:23:14.608 05:18:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:23:14.608 05:18:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:23:14.608 05:18:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:23:14.608 05:18:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:23:14.608 05:18:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:23:14.608 05:18:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:14.608 05:18:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:23:14.608 05:18:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:23:14.608 05:18:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:14.866 05:18:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:23:14.866 "name": "Existed_Raid", 01:23:14.866 "uuid": 
"00000000-0000-0000-0000-000000000000", 01:23:14.866 "strip_size_kb": 64, 01:23:14.866 "state": "configuring", 01:23:14.866 "raid_level": "concat", 01:23:14.866 "superblock": false, 01:23:14.866 "num_base_bdevs": 3, 01:23:14.866 "num_base_bdevs_discovered": 1, 01:23:14.866 "num_base_bdevs_operational": 3, 01:23:14.866 "base_bdevs_list": [ 01:23:14.866 { 01:23:14.866 "name": "BaseBdev1", 01:23:14.866 "uuid": "2aa03254-9a15-4bc4-8041-ce198e71bd66", 01:23:14.866 "is_configured": true, 01:23:14.866 "data_offset": 0, 01:23:14.866 "data_size": 65536 01:23:14.866 }, 01:23:14.866 { 01:23:14.866 "name": null, 01:23:14.866 "uuid": "39f99f65-c132-465f-bd24-55ebe59298a0", 01:23:14.866 "is_configured": false, 01:23:14.866 "data_offset": 0, 01:23:14.866 "data_size": 65536 01:23:14.866 }, 01:23:14.866 { 01:23:14.866 "name": null, 01:23:14.866 "uuid": "fe349fd3-03a4-4234-8c4b-cd2909395494", 01:23:14.866 "is_configured": false, 01:23:14.866 "data_offset": 0, 01:23:14.866 "data_size": 65536 01:23:14.866 } 01:23:14.866 ] 01:23:14.866 }' 01:23:14.866 05:18:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:23:14.866 05:18:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:23:15.431 05:18:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 01:23:15.431 05:18:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:15.431 05:18:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:23:15.431 05:18:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 01:23:15.431 05:18:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:15.431 05:18:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 01:23:15.431 05:18:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 01:23:15.431 05:18:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:15.431 05:18:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:23:15.431 [2024-12-09 05:18:06.819762] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 01:23:15.431 05:18:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:15.431 05:18:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 01:23:15.431 05:18:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:23:15.431 05:18:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:23:15.431 05:18:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 01:23:15.431 05:18:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:23:15.431 05:18:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 01:23:15.431 05:18:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:23:15.431 05:18:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:23:15.432 05:18:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:23:15.432 05:18:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:23:15.432 05:18:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:23:15.432 05:18:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:15.432 05:18:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
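As the dumps show, `bdev_raid_remove_base_bdev` does not shrink `base_bdevs_list`: the slot remains with `"name": null` and `"is_configured": false`, and `bdev_raid_add_base_bdev` flips it back. The log's `jq '.[0].base_bdevs_list[2].is_configured'` probes exactly that slot. A small Python sketch of the bookkeeping (data abridged from the log; this models the JSON shape, not SPDK internals):

```python
import json

# base_bdevs_list as dumped after BaseBdev3 was removed: slots persist,
# but a removed member shows name null and is_configured false.
BASE_BDEVS = json.loads("""
[
  {"name": "BaseBdev1", "is_configured": true,  "data_size": 65536},
  {"name": null,        "is_configured": false, "data_size": 65536},
  {"name": null,        "is_configured": false, "data_size": 65536}
]
""")

def discovered(base_bdevs):
    """num_base_bdevs_discovered is the count of configured slots."""
    return sum(1 for b in base_bdevs if b["is_configured"])

print(discovered(BASE_BDEVS))  # 1, matching the dump after the removal

# Model the effect of: rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3
BASE_BDEVS[2].update(name="BaseBdev3", is_configured=True)
print(BASE_BDEVS[2]["is_configured"], discovered(BASE_BDEVS))  # True 2
```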
01:23:15.432 05:18:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:23:15.432 05:18:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:15.432 05:18:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:23:15.432 "name": "Existed_Raid", 01:23:15.432 "uuid": "00000000-0000-0000-0000-000000000000", 01:23:15.432 "strip_size_kb": 64, 01:23:15.432 "state": "configuring", 01:23:15.432 "raid_level": "concat", 01:23:15.432 "superblock": false, 01:23:15.432 "num_base_bdevs": 3, 01:23:15.432 "num_base_bdevs_discovered": 2, 01:23:15.432 "num_base_bdevs_operational": 3, 01:23:15.432 "base_bdevs_list": [ 01:23:15.432 { 01:23:15.432 "name": "BaseBdev1", 01:23:15.432 "uuid": "2aa03254-9a15-4bc4-8041-ce198e71bd66", 01:23:15.432 "is_configured": true, 01:23:15.432 "data_offset": 0, 01:23:15.432 "data_size": 65536 01:23:15.432 }, 01:23:15.432 { 01:23:15.432 "name": null, 01:23:15.432 "uuid": "39f99f65-c132-465f-bd24-55ebe59298a0", 01:23:15.432 "is_configured": false, 01:23:15.432 "data_offset": 0, 01:23:15.432 "data_size": 65536 01:23:15.432 }, 01:23:15.432 { 01:23:15.432 "name": "BaseBdev3", 01:23:15.432 "uuid": "fe349fd3-03a4-4234-8c4b-cd2909395494", 01:23:15.432 "is_configured": true, 01:23:15.432 "data_offset": 0, 01:23:15.432 "data_size": 65536 01:23:15.432 } 01:23:15.432 ] 01:23:15.432 }' 01:23:15.432 05:18:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:23:15.432 05:18:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:23:15.999 05:18:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 01:23:15.999 05:18:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:15.999 05:18:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq 
'.[0].base_bdevs_list[2].is_configured' 01:23:15.999 05:18:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:23:15.999 05:18:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:15.999 05:18:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 01:23:15.999 05:18:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 01:23:15.999 05:18:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:15.999 05:18:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:23:15.999 [2024-12-09 05:18:07.411996] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 01:23:15.999 05:18:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:15.999 05:18:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 01:23:15.999 05:18:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:23:15.999 05:18:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:23:15.999 05:18:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 01:23:15.999 05:18:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:23:15.999 05:18:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 01:23:15.999 05:18:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:23:15.999 05:18:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:23:15.999 05:18:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:23:15.999 05:18:07 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:23:15.999 05:18:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:23:15.999 05:18:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:23:15.999 05:18:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:15.999 05:18:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:23:15.999 05:18:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:15.999 05:18:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:23:15.999 "name": "Existed_Raid", 01:23:15.999 "uuid": "00000000-0000-0000-0000-000000000000", 01:23:15.999 "strip_size_kb": 64, 01:23:15.999 "state": "configuring", 01:23:15.999 "raid_level": "concat", 01:23:15.999 "superblock": false, 01:23:15.999 "num_base_bdevs": 3, 01:23:15.999 "num_base_bdevs_discovered": 1, 01:23:15.999 "num_base_bdevs_operational": 3, 01:23:15.999 "base_bdevs_list": [ 01:23:15.999 { 01:23:15.999 "name": null, 01:23:15.999 "uuid": "2aa03254-9a15-4bc4-8041-ce198e71bd66", 01:23:15.999 "is_configured": false, 01:23:15.999 "data_offset": 0, 01:23:15.999 "data_size": 65536 01:23:15.999 }, 01:23:15.999 { 01:23:15.999 "name": null, 01:23:15.999 "uuid": "39f99f65-c132-465f-bd24-55ebe59298a0", 01:23:15.999 "is_configured": false, 01:23:15.999 "data_offset": 0, 01:23:15.999 "data_size": 65536 01:23:15.999 }, 01:23:15.999 { 01:23:15.999 "name": "BaseBdev3", 01:23:15.999 "uuid": "fe349fd3-03a4-4234-8c4b-cd2909395494", 01:23:15.999 "is_configured": true, 01:23:15.999 "data_offset": 0, 01:23:15.999 "data_size": 65536 01:23:15.999 } 01:23:15.999 ] 01:23:15.999 }' 01:23:15.999 05:18:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:23:15.999 05:18:07 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:23:16.596 05:18:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 01:23:16.596 05:18:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 01:23:16.596 05:18:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:16.596 05:18:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:23:16.596 05:18:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:16.596 05:18:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 01:23:16.596 05:18:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 01:23:16.596 05:18:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:16.596 05:18:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:23:16.596 [2024-12-09 05:18:08.152577] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 01:23:16.596 05:18:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:16.596 05:18:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 01:23:16.596 05:18:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:23:16.596 05:18:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:23:16.596 05:18:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 01:23:16.596 05:18:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:23:16.596 05:18:08 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 01:23:16.596 05:18:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:23:16.596 05:18:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:23:16.596 05:18:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:23:16.596 05:18:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:23:16.596 05:18:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:23:16.596 05:18:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:16.596 05:18:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:23:16.596 05:18:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:23:16.596 05:18:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:16.868 05:18:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:23:16.868 "name": "Existed_Raid", 01:23:16.868 "uuid": "00000000-0000-0000-0000-000000000000", 01:23:16.868 "strip_size_kb": 64, 01:23:16.868 "state": "configuring", 01:23:16.868 "raid_level": "concat", 01:23:16.868 "superblock": false, 01:23:16.868 "num_base_bdevs": 3, 01:23:16.868 "num_base_bdevs_discovered": 2, 01:23:16.868 "num_base_bdevs_operational": 3, 01:23:16.868 "base_bdevs_list": [ 01:23:16.868 { 01:23:16.868 "name": null, 01:23:16.868 "uuid": "2aa03254-9a15-4bc4-8041-ce198e71bd66", 01:23:16.868 "is_configured": false, 01:23:16.868 "data_offset": 0, 01:23:16.868 "data_size": 65536 01:23:16.868 }, 01:23:16.868 { 01:23:16.868 "name": "BaseBdev2", 01:23:16.868 "uuid": "39f99f65-c132-465f-bd24-55ebe59298a0", 01:23:16.868 "is_configured": true, 01:23:16.868 "data_offset": 
0, 01:23:16.868 "data_size": 65536 01:23:16.868 }, 01:23:16.868 { 01:23:16.868 "name": "BaseBdev3", 01:23:16.868 "uuid": "fe349fd3-03a4-4234-8c4b-cd2909395494", 01:23:16.868 "is_configured": true, 01:23:16.868 "data_offset": 0, 01:23:16.868 "data_size": 65536 01:23:16.868 } 01:23:16.868 ] 01:23:16.868 }' 01:23:16.868 05:18:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:23:16.868 05:18:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:23:17.125 05:18:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 01:23:17.125 05:18:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 01:23:17.125 05:18:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:17.125 05:18:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:23:17.125 05:18:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:17.382 05:18:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 01:23:17.382 05:18:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 01:23:17.382 05:18:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:17.382 05:18:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:23:17.382 05:18:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 01:23:17.382 05:18:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:17.382 05:18:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 2aa03254-9a15-4bc4-8041-ce198e71bd66 01:23:17.382 05:18:08 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 01:23:17.382 05:18:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:23:17.382 [2024-12-09 05:18:08.850370] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 01:23:17.382 [2024-12-09 05:18:08.850483] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 01:23:17.382 [2024-12-09 05:18:08.850502] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 01:23:17.382 [2024-12-09 05:18:08.850819] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 01:23:17.382 [2024-12-09 05:18:08.851105] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 01:23:17.382 [2024-12-09 05:18:08.851123] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 01:23:17.382 [2024-12-09 05:18:08.851526] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:23:17.382 NewBaseBdev 01:23:17.383 05:18:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:17.383 05:18:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 01:23:17.383 05:18:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 01:23:17.383 05:18:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 01:23:17.383 05:18:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 01:23:17.383 05:18:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 01:23:17.383 05:18:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 01:23:17.383 05:18:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 01:23:17.383 
05:18:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:17.383 05:18:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:23:17.383 05:18:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:17.383 05:18:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 01:23:17.383 05:18:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:17.383 05:18:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:23:17.383 [ 01:23:17.383 { 01:23:17.383 "name": "NewBaseBdev", 01:23:17.383 "aliases": [ 01:23:17.383 "2aa03254-9a15-4bc4-8041-ce198e71bd66" 01:23:17.383 ], 01:23:17.383 "product_name": "Malloc disk", 01:23:17.383 "block_size": 512, 01:23:17.383 "num_blocks": 65536, 01:23:17.383 "uuid": "2aa03254-9a15-4bc4-8041-ce198e71bd66", 01:23:17.383 "assigned_rate_limits": { 01:23:17.383 "rw_ios_per_sec": 0, 01:23:17.383 "rw_mbytes_per_sec": 0, 01:23:17.383 "r_mbytes_per_sec": 0, 01:23:17.383 "w_mbytes_per_sec": 0 01:23:17.383 }, 01:23:17.383 "claimed": true, 01:23:17.383 "claim_type": "exclusive_write", 01:23:17.383 "zoned": false, 01:23:17.383 "supported_io_types": { 01:23:17.383 "read": true, 01:23:17.383 "write": true, 01:23:17.383 "unmap": true, 01:23:17.383 "flush": true, 01:23:17.383 "reset": true, 01:23:17.383 "nvme_admin": false, 01:23:17.383 "nvme_io": false, 01:23:17.383 "nvme_io_md": false, 01:23:17.383 "write_zeroes": true, 01:23:17.383 "zcopy": true, 01:23:17.383 "get_zone_info": false, 01:23:17.383 "zone_management": false, 01:23:17.383 "zone_append": false, 01:23:17.383 "compare": false, 01:23:17.383 "compare_and_write": false, 01:23:17.383 "abort": true, 01:23:17.383 "seek_hole": false, 01:23:17.383 "seek_data": false, 01:23:17.383 "copy": true, 01:23:17.383 "nvme_iov_md": false 01:23:17.383 }, 01:23:17.383 
"memory_domains": [ 01:23:17.383 { 01:23:17.383 "dma_device_id": "system", 01:23:17.383 "dma_device_type": 1 01:23:17.383 }, 01:23:17.383 { 01:23:17.383 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:23:17.383 "dma_device_type": 2 01:23:17.383 } 01:23:17.383 ], 01:23:17.383 "driver_specific": {} 01:23:17.383 } 01:23:17.383 ] 01:23:17.383 05:18:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:17.383 05:18:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 01:23:17.383 05:18:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 01:23:17.383 05:18:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:23:17.383 05:18:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:23:17.383 05:18:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 01:23:17.383 05:18:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:23:17.383 05:18:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 01:23:17.383 05:18:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:23:17.383 05:18:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:23:17.383 05:18:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:23:17.383 05:18:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:23:17.383 05:18:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:23:17.383 05:18:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:23:17.383 05:18:08 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 01:23:17.383 05:18:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:23:17.383 05:18:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:17.383 05:18:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:23:17.383 "name": "Existed_Raid", 01:23:17.383 "uuid": "c4a4628d-16b0-4001-a36b-8dd566e7b95e", 01:23:17.383 "strip_size_kb": 64, 01:23:17.383 "state": "online", 01:23:17.383 "raid_level": "concat", 01:23:17.383 "superblock": false, 01:23:17.383 "num_base_bdevs": 3, 01:23:17.383 "num_base_bdevs_discovered": 3, 01:23:17.383 "num_base_bdevs_operational": 3, 01:23:17.383 "base_bdevs_list": [ 01:23:17.383 { 01:23:17.383 "name": "NewBaseBdev", 01:23:17.383 "uuid": "2aa03254-9a15-4bc4-8041-ce198e71bd66", 01:23:17.383 "is_configured": true, 01:23:17.383 "data_offset": 0, 01:23:17.383 "data_size": 65536 01:23:17.383 }, 01:23:17.383 { 01:23:17.383 "name": "BaseBdev2", 01:23:17.383 "uuid": "39f99f65-c132-465f-bd24-55ebe59298a0", 01:23:17.383 "is_configured": true, 01:23:17.383 "data_offset": 0, 01:23:17.383 "data_size": 65536 01:23:17.383 }, 01:23:17.383 { 01:23:17.383 "name": "BaseBdev3", 01:23:17.383 "uuid": "fe349fd3-03a4-4234-8c4b-cd2909395494", 01:23:17.383 "is_configured": true, 01:23:17.383 "data_offset": 0, 01:23:17.383 "data_size": 65536 01:23:17.383 } 01:23:17.383 ] 01:23:17.383 }' 01:23:17.383 05:18:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:23:17.383 05:18:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:23:17.948 05:18:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 01:23:17.948 05:18:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 01:23:17.948 05:18:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 
-- # local raid_bdev_info 01:23:17.948 05:18:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 01:23:17.948 05:18:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 01:23:17.948 05:18:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 01:23:17.948 05:18:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 01:23:17.948 05:18:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 01:23:17.948 05:18:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:17.948 05:18:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:23:17.948 [2024-12-09 05:18:09.411056] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 01:23:17.948 05:18:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:17.948 05:18:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 01:23:17.948 "name": "Existed_Raid", 01:23:17.948 "aliases": [ 01:23:17.948 "c4a4628d-16b0-4001-a36b-8dd566e7b95e" 01:23:17.948 ], 01:23:17.948 "product_name": "Raid Volume", 01:23:17.948 "block_size": 512, 01:23:17.948 "num_blocks": 196608, 01:23:17.948 "uuid": "c4a4628d-16b0-4001-a36b-8dd566e7b95e", 01:23:17.948 "assigned_rate_limits": { 01:23:17.948 "rw_ios_per_sec": 0, 01:23:17.948 "rw_mbytes_per_sec": 0, 01:23:17.948 "r_mbytes_per_sec": 0, 01:23:17.948 "w_mbytes_per_sec": 0 01:23:17.948 }, 01:23:17.948 "claimed": false, 01:23:17.948 "zoned": false, 01:23:17.948 "supported_io_types": { 01:23:17.948 "read": true, 01:23:17.948 "write": true, 01:23:17.948 "unmap": true, 01:23:17.948 "flush": true, 01:23:17.948 "reset": true, 01:23:17.948 "nvme_admin": false, 01:23:17.948 "nvme_io": false, 01:23:17.948 "nvme_io_md": false, 01:23:17.948 "write_zeroes": true, 
01:23:17.948 "zcopy": false, 01:23:17.948 "get_zone_info": false, 01:23:17.948 "zone_management": false, 01:23:17.948 "zone_append": false, 01:23:17.948 "compare": false, 01:23:17.948 "compare_and_write": false, 01:23:17.948 "abort": false, 01:23:17.948 "seek_hole": false, 01:23:17.948 "seek_data": false, 01:23:17.948 "copy": false, 01:23:17.948 "nvme_iov_md": false 01:23:17.948 }, 01:23:17.948 "memory_domains": [ 01:23:17.948 { 01:23:17.948 "dma_device_id": "system", 01:23:17.948 "dma_device_type": 1 01:23:17.948 }, 01:23:17.948 { 01:23:17.948 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:23:17.948 "dma_device_type": 2 01:23:17.948 }, 01:23:17.948 { 01:23:17.948 "dma_device_id": "system", 01:23:17.948 "dma_device_type": 1 01:23:17.948 }, 01:23:17.948 { 01:23:17.948 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:23:17.948 "dma_device_type": 2 01:23:17.948 }, 01:23:17.948 { 01:23:17.948 "dma_device_id": "system", 01:23:17.948 "dma_device_type": 1 01:23:17.948 }, 01:23:17.948 { 01:23:17.948 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:23:17.948 "dma_device_type": 2 01:23:17.948 } 01:23:17.948 ], 01:23:17.948 "driver_specific": { 01:23:17.948 "raid": { 01:23:17.948 "uuid": "c4a4628d-16b0-4001-a36b-8dd566e7b95e", 01:23:17.948 "strip_size_kb": 64, 01:23:17.948 "state": "online", 01:23:17.948 "raid_level": "concat", 01:23:17.948 "superblock": false, 01:23:17.948 "num_base_bdevs": 3, 01:23:17.948 "num_base_bdevs_discovered": 3, 01:23:17.948 "num_base_bdevs_operational": 3, 01:23:17.948 "base_bdevs_list": [ 01:23:17.948 { 01:23:17.948 "name": "NewBaseBdev", 01:23:17.948 "uuid": "2aa03254-9a15-4bc4-8041-ce198e71bd66", 01:23:17.948 "is_configured": true, 01:23:17.948 "data_offset": 0, 01:23:17.948 "data_size": 65536 01:23:17.948 }, 01:23:17.948 { 01:23:17.948 "name": "BaseBdev2", 01:23:17.948 "uuid": "39f99f65-c132-465f-bd24-55ebe59298a0", 01:23:17.948 "is_configured": true, 01:23:17.948 "data_offset": 0, 01:23:17.948 "data_size": 65536 01:23:17.948 }, 01:23:17.948 { 
01:23:17.948 "name": "BaseBdev3", 01:23:17.948 "uuid": "fe349fd3-03a4-4234-8c4b-cd2909395494", 01:23:17.948 "is_configured": true, 01:23:17.948 "data_offset": 0, 01:23:17.948 "data_size": 65536 01:23:17.948 } 01:23:17.948 ] 01:23:17.948 } 01:23:17.948 } 01:23:17.948 }' 01:23:17.948 05:18:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 01:23:17.948 05:18:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 01:23:17.948 BaseBdev2 01:23:17.948 BaseBdev3' 01:23:17.948 05:18:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:23:17.948 05:18:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 01:23:17.948 05:18:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:23:17.948 05:18:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:23:17.948 05:18:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 01:23:18.206 05:18:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:18.206 05:18:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:23:18.206 05:18:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:18.206 05:18:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:23:18.206 05:18:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:23:18.206 05:18:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:23:18.206 05:18:09 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:23:18.206 05:18:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 01:23:18.206 05:18:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:18.206 05:18:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:23:18.206 05:18:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:18.206 05:18:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:23:18.206 05:18:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:23:18.206 05:18:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:23:18.206 05:18:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:23:18.206 05:18:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 01:23:18.206 05:18:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:18.206 05:18:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:23:18.206 05:18:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:18.207 05:18:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:23:18.207 05:18:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:23:18.207 05:18:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 01:23:18.207 05:18:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:18.207 05:18:09 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 01:23:18.207 [2024-12-09 05:18:09.718689] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 01:23:18.207 [2024-12-09 05:18:09.718755] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 01:23:18.207 [2024-12-09 05:18:09.718888] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 01:23:18.207 [2024-12-09 05:18:09.719007] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 01:23:18.207 [2024-12-09 05:18:09.719030] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 01:23:18.207 05:18:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:18.207 05:18:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 65523 01:23:18.207 05:18:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 65523 ']' 01:23:18.207 05:18:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 65523 01:23:18.207 05:18:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 01:23:18.207 05:18:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:23:18.207 05:18:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65523 01:23:18.207 05:18:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:23:18.207 05:18:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:23:18.207 killing process with pid 65523 01:23:18.207 05:18:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65523' 01:23:18.207 05:18:09 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@973 -- # kill 65523 01:23:18.207 [2024-12-09 05:18:09.760004] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 01:23:18.207 05:18:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 65523 01:23:18.465 [2024-12-09 05:18:10.065897] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 01:23:19.841 05:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 01:23:19.841 01:23:19.841 real 0m12.419s 01:23:19.841 user 0m20.384s 01:23:19.841 sys 0m1.735s 01:23:19.841 05:18:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 01:23:19.841 05:18:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:23:19.841 ************************************ 01:23:19.841 END TEST raid_state_function_test 01:23:19.841 ************************************ 01:23:19.841 05:18:11 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 01:23:19.841 05:18:11 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 01:23:19.841 05:18:11 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 01:23:19.841 05:18:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 01:23:19.841 ************************************ 01:23:19.841 START TEST raid_state_function_test_sb 01:23:19.841 ************************************ 01:23:19.841 05:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 true 01:23:19.841 05:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 01:23:19.841 05:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 01:23:19.841 05:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 01:23:19.841 05:18:11 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@208 -- # local raid_bdev 01:23:19.841 05:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 01:23:19.841 05:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 01:23:19.841 05:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 01:23:19.841 05:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 01:23:19.841 05:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 01:23:19.841 05:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 01:23:19.841 05:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 01:23:19.841 05:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 01:23:19.841 05:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 01:23:19.841 05:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 01:23:19.841 05:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 01:23:19.841 05:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 01:23:19.841 05:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 01:23:19.841 05:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 01:23:19.841 05:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 01:23:19.841 05:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 01:23:19.841 05:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 01:23:19.841 05:18:11 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 01:23:19.841 05:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 01:23:19.841 05:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 01:23:19.841 05:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 01:23:19.841 05:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 01:23:19.841 05:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=66168 01:23:19.841 05:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 01:23:19.841 Process raid pid: 66168 01:23:19.841 05:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 66168' 01:23:19.841 05:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 66168 01:23:19.841 05:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 66168 ']' 01:23:19.841 05:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:23:19.841 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:23:19.841 05:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 01:23:19.841 05:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
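The trace at bdev/bdev_raid.sh@215-223 above picks the create-time arguments for this superblock variant of the test: any RAID level other than raid1 takes a strip size (`-z`), and superblock=true adds `-s`. A minimal standalone sketch of that selection logic, with values mirroring this run (the final echo is illustrative, not part of the real script):

```shell
#!/usr/bin/env bash
# Sketch of the argument selection traced above (concat, superblock enabled).
raid_level=concat
superblock=true
strip_size_create_arg=''
superblock_create_arg=''
# Every level except raid1 is striped, so it takes a strip size in KiB.
if [ "$raid_level" != raid1 ]; then
    strip_size=64
    strip_size_create_arg="-z $strip_size"
fi
# The _sb variants of the test pass superblock=true, which adds -s.
if [ "$superblock" = true ]; then
    superblock_create_arg='-s'
fi
# With these values the create call carries: -z 64 -s -r concat
echo "$strip_size_create_arg $superblock_create_arg -r $raid_level"
```

With raid1 the strip-size branch is skipped, which would leave `strip_size_create_arg` empty; that matches the `'[' concat '!=' raid1 ']'` guard visible in the trace.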
01:23:19.841 05:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 01:23:19.841 05:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:23:20.099 [2024-12-09 05:18:11.539135] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:23:20.099 [2024-12-09 05:18:11.539316] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:23:20.356 [2024-12-09 05:18:11.732240] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:23:20.356 [2024-12-09 05:18:11.913848] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:23:20.613 [2024-12-09 05:18:12.161171] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 01:23:20.613 [2024-12-09 05:18:12.161266] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 01:23:21.178 05:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:23:21.178 05:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 01:23:21.178 05:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 01:23:21.178 05:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:21.178 05:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:23:21.178 [2024-12-09 05:18:12.542468] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 01:23:21.178 [2024-12-09 05:18:12.542566] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 01:23:21.178 [2024-12-09 
05:18:12.542586] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 01:23:21.178 [2024-12-09 05:18:12.542603] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 01:23:21.178 [2024-12-09 05:18:12.542613] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 01:23:21.178 [2024-12-09 05:18:12.542629] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 01:23:21.178 05:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:21.178 05:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 01:23:21.178 05:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:23:21.178 05:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:23:21.178 05:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 01:23:21.178 05:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:23:21.178 05:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 01:23:21.178 05:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:23:21.178 05:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:23:21.178 05:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:23:21.178 05:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 01:23:21.178 05:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:23:21.178 05:18:12 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:23:21.178 05:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:21.178 05:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:23:21.178 05:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:21.178 05:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:23:21.178 "name": "Existed_Raid", 01:23:21.179 "uuid": "78e350c0-bc9c-4a87-be95-5340c42157ba", 01:23:21.179 "strip_size_kb": 64, 01:23:21.179 "state": "configuring", 01:23:21.179 "raid_level": "concat", 01:23:21.179 "superblock": true, 01:23:21.179 "num_base_bdevs": 3, 01:23:21.179 "num_base_bdevs_discovered": 0, 01:23:21.179 "num_base_bdevs_operational": 3, 01:23:21.179 "base_bdevs_list": [ 01:23:21.179 { 01:23:21.179 "name": "BaseBdev1", 01:23:21.179 "uuid": "00000000-0000-0000-0000-000000000000", 01:23:21.179 "is_configured": false, 01:23:21.179 "data_offset": 0, 01:23:21.179 "data_size": 0 01:23:21.179 }, 01:23:21.179 { 01:23:21.179 "name": "BaseBdev2", 01:23:21.179 "uuid": "00000000-0000-0000-0000-000000000000", 01:23:21.179 "is_configured": false, 01:23:21.179 "data_offset": 0, 01:23:21.179 "data_size": 0 01:23:21.179 }, 01:23:21.179 { 01:23:21.179 "name": "BaseBdev3", 01:23:21.179 "uuid": "00000000-0000-0000-0000-000000000000", 01:23:21.179 "is_configured": false, 01:23:21.179 "data_offset": 0, 01:23:21.179 "data_size": 0 01:23:21.179 } 01:23:21.179 ] 01:23:21.179 }' 01:23:21.179 05:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:23:21.179 05:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:23:21.745 05:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 01:23:21.745 05:18:13 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 01:23:21.745 05:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:23:21.745 [2024-12-09 05:18:13.062518] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 01:23:21.745 [2024-12-09 05:18:13.062592] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 01:23:21.745 05:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:21.745 05:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 01:23:21.745 05:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:21.745 05:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:23:21.745 [2024-12-09 05:18:13.070472] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 01:23:21.745 [2024-12-09 05:18:13.070536] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 01:23:21.745 [2024-12-09 05:18:13.070552] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 01:23:21.745 [2024-12-09 05:18:13.070569] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 01:23:21.745 [2024-12-09 05:18:13.070579] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 01:23:21.745 [2024-12-09 05:18:13.070595] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 01:23:21.745 05:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:21.745 05:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 01:23:21.745 
05:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:21.745 05:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:23:21.745 [2024-12-09 05:18:13.123468] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 01:23:21.745 BaseBdev1 01:23:21.745 05:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:21.745 05:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 01:23:21.745 05:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 01:23:21.745 05:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 01:23:21.745 05:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 01:23:21.745 05:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 01:23:21.745 05:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 01:23:21.745 05:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 01:23:21.745 05:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:21.745 05:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:23:21.745 05:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:21.745 05:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 01:23:21.745 05:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:21.745 05:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:23:21.745 [ 01:23:21.745 { 
01:23:21.745 "name": "BaseBdev1", 01:23:21.745 "aliases": [ 01:23:21.745 "00fd0677-e8cb-4fa3-91ff-0f77d66a057b" 01:23:21.745 ], 01:23:21.745 "product_name": "Malloc disk", 01:23:21.745 "block_size": 512, 01:23:21.745 "num_blocks": 65536, 01:23:21.745 "uuid": "00fd0677-e8cb-4fa3-91ff-0f77d66a057b", 01:23:21.745 "assigned_rate_limits": { 01:23:21.746 "rw_ios_per_sec": 0, 01:23:21.746 "rw_mbytes_per_sec": 0, 01:23:21.746 "r_mbytes_per_sec": 0, 01:23:21.746 "w_mbytes_per_sec": 0 01:23:21.746 }, 01:23:21.746 "claimed": true, 01:23:21.746 "claim_type": "exclusive_write", 01:23:21.746 "zoned": false, 01:23:21.746 "supported_io_types": { 01:23:21.746 "read": true, 01:23:21.746 "write": true, 01:23:21.746 "unmap": true, 01:23:21.746 "flush": true, 01:23:21.746 "reset": true, 01:23:21.746 "nvme_admin": false, 01:23:21.746 "nvme_io": false, 01:23:21.746 "nvme_io_md": false, 01:23:21.746 "write_zeroes": true, 01:23:21.746 "zcopy": true, 01:23:21.746 "get_zone_info": false, 01:23:21.746 "zone_management": false, 01:23:21.746 "zone_append": false, 01:23:21.746 "compare": false, 01:23:21.746 "compare_and_write": false, 01:23:21.746 "abort": true, 01:23:21.746 "seek_hole": false, 01:23:21.746 "seek_data": false, 01:23:21.746 "copy": true, 01:23:21.746 "nvme_iov_md": false 01:23:21.746 }, 01:23:21.746 "memory_domains": [ 01:23:21.746 { 01:23:21.746 "dma_device_id": "system", 01:23:21.746 "dma_device_type": 1 01:23:21.746 }, 01:23:21.746 { 01:23:21.746 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:23:21.746 "dma_device_type": 2 01:23:21.746 } 01:23:21.746 ], 01:23:21.746 "driver_specific": {} 01:23:21.746 } 01:23:21.746 ] 01:23:21.746 05:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:21.746 05:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 01:23:21.746 05:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 
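The bdev_get_bdevs JSON above is consumed by jq filters such as `.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")`. A minimal sketch of pulling two of those fields without jq, using bash parameter expansion on a sample abridged from this log (illustration only; the real script shells out to jq):

```shell
#!/usr/bin/env bash
# Sample abridged from the BaseBdev1 JSON above; strip the text up to the
# key, then cut at the closing quote or comma to isolate the value.
bdev='{"name":"BaseBdev1","block_size":512,"num_blocks":65536,"claimed":true}'
name=${bdev#*\"name\":\"}; name=${name%%\"*}   # string value -> BaseBdev1
bs=${bdev#*\"block_size\":}; bs=${bs%%,*}      # numeric value -> 512
nb=${bdev#*\"num_blocks\":}; nb=${nb%%,*}      # numeric value -> 65536
echo "$name: block_size=$bs num_blocks=$nb"
```

Note that in the trace the joined string is `'512   '` with trailing blanks: md_size, md_interleave and dif_type appear to be null for these Malloc bdevs, and jq's join() renders null as an empty string, which is why the comparison at bdev_raid.sh@193 matches against the escaped-space pattern `\5\1\2\ \ \ `.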
01:23:21.746 05:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:23:21.746 05:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:23:21.746 05:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 01:23:21.746 05:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:23:21.746 05:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 01:23:21.746 05:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:23:21.746 05:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:23:21.746 05:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:23:21.746 05:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 01:23:21.746 05:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:23:21.746 05:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:23:21.746 05:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:21.746 05:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:23:21.746 05:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:21.746 05:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:23:21.746 "name": "Existed_Raid", 01:23:21.746 "uuid": "3880b8b5-440d-493d-abef-5c866412980c", 01:23:21.746 "strip_size_kb": 64, 01:23:21.746 "state": "configuring", 01:23:21.746 "raid_level": "concat", 01:23:21.746 "superblock": true, 01:23:21.746 
"num_base_bdevs": 3, 01:23:21.746 "num_base_bdevs_discovered": 1, 01:23:21.746 "num_base_bdevs_operational": 3, 01:23:21.746 "base_bdevs_list": [ 01:23:21.746 { 01:23:21.746 "name": "BaseBdev1", 01:23:21.746 "uuid": "00fd0677-e8cb-4fa3-91ff-0f77d66a057b", 01:23:21.746 "is_configured": true, 01:23:21.746 "data_offset": 2048, 01:23:21.746 "data_size": 63488 01:23:21.746 }, 01:23:21.746 { 01:23:21.746 "name": "BaseBdev2", 01:23:21.746 "uuid": "00000000-0000-0000-0000-000000000000", 01:23:21.746 "is_configured": false, 01:23:21.746 "data_offset": 0, 01:23:21.746 "data_size": 0 01:23:21.746 }, 01:23:21.746 { 01:23:21.746 "name": "BaseBdev3", 01:23:21.746 "uuid": "00000000-0000-0000-0000-000000000000", 01:23:21.746 "is_configured": false, 01:23:21.746 "data_offset": 0, 01:23:21.746 "data_size": 0 01:23:21.746 } 01:23:21.746 ] 01:23:21.746 }' 01:23:21.746 05:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:23:21.746 05:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:23:22.345 05:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 01:23:22.345 05:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:22.345 05:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:23:22.345 [2024-12-09 05:18:13.707725] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 01:23:22.345 [2024-12-09 05:18:13.707833] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 01:23:22.345 05:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:22.345 05:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 01:23:22.345 
05:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:22.345 05:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:23:22.345 [2024-12-09 05:18:13.715787] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 01:23:22.345 [2024-12-09 05:18:13.718546] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 01:23:22.345 [2024-12-09 05:18:13.718599] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 01:23:22.345 [2024-12-09 05:18:13.718617] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 01:23:22.345 [2024-12-09 05:18:13.718632] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 01:23:22.345 05:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:22.345 05:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 01:23:22.345 05:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 01:23:22.345 05:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 01:23:22.345 05:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:23:22.345 05:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:23:22.345 05:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 01:23:22.345 05:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:23:22.345 05:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 01:23:22.345 05:18:13 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:23:22.345 05:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:23:22.345 05:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:23:22.345 05:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 01:23:22.345 05:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:23:22.345 05:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:23:22.345 05:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:22.345 05:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:23:22.345 05:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:22.345 05:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:23:22.345 "name": "Existed_Raid", 01:23:22.345 "uuid": "84a94b1d-63c2-46b0-bf89-fa29cd98bd62", 01:23:22.345 "strip_size_kb": 64, 01:23:22.345 "state": "configuring", 01:23:22.345 "raid_level": "concat", 01:23:22.345 "superblock": true, 01:23:22.345 "num_base_bdevs": 3, 01:23:22.345 "num_base_bdevs_discovered": 1, 01:23:22.345 "num_base_bdevs_operational": 3, 01:23:22.345 "base_bdevs_list": [ 01:23:22.345 { 01:23:22.345 "name": "BaseBdev1", 01:23:22.345 "uuid": "00fd0677-e8cb-4fa3-91ff-0f77d66a057b", 01:23:22.345 "is_configured": true, 01:23:22.345 "data_offset": 2048, 01:23:22.345 "data_size": 63488 01:23:22.345 }, 01:23:22.345 { 01:23:22.345 "name": "BaseBdev2", 01:23:22.345 "uuid": "00000000-0000-0000-0000-000000000000", 01:23:22.345 "is_configured": false, 01:23:22.345 "data_offset": 0, 01:23:22.345 "data_size": 0 01:23:22.345 }, 01:23:22.345 { 01:23:22.345 "name": "BaseBdev3", 01:23:22.345 "uuid": 
"00000000-0000-0000-0000-000000000000", 01:23:22.345 "is_configured": false, 01:23:22.345 "data_offset": 0, 01:23:22.345 "data_size": 0 01:23:22.345 } 01:23:22.345 ] 01:23:22.345 }' 01:23:22.345 05:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:23:22.345 05:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:23:22.624 05:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 01:23:22.624 05:18:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:22.624 05:18:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:23:22.624 [2024-12-09 05:18:14.216840] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 01:23:22.624 BaseBdev2 01:23:22.624 05:18:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:22.624 05:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 01:23:22.624 05:18:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 01:23:22.624 05:18:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 01:23:22.624 05:18:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 01:23:22.624 05:18:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 01:23:22.624 05:18:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 01:23:22.624 05:18:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 01:23:22.624 05:18:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:22.624 05:18:14 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 01:23:22.624 05:18:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:22.625 05:18:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 01:23:22.625 05:18:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:22.625 05:18:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:23:22.625 [ 01:23:22.625 { 01:23:22.625 "name": "BaseBdev2", 01:23:22.625 "aliases": [ 01:23:22.625 "5d651971-eb25-4cb5-8439-cbf360a78fa2" 01:23:22.883 ], 01:23:22.883 "product_name": "Malloc disk", 01:23:22.883 "block_size": 512, 01:23:22.883 "num_blocks": 65536, 01:23:22.883 "uuid": "5d651971-eb25-4cb5-8439-cbf360a78fa2", 01:23:22.883 "assigned_rate_limits": { 01:23:22.883 "rw_ios_per_sec": 0, 01:23:22.883 "rw_mbytes_per_sec": 0, 01:23:22.883 "r_mbytes_per_sec": 0, 01:23:22.883 "w_mbytes_per_sec": 0 01:23:22.883 }, 01:23:22.883 "claimed": true, 01:23:22.883 "claim_type": "exclusive_write", 01:23:22.883 "zoned": false, 01:23:22.883 "supported_io_types": { 01:23:22.883 "read": true, 01:23:22.883 "write": true, 01:23:22.883 "unmap": true, 01:23:22.883 "flush": true, 01:23:22.883 "reset": true, 01:23:22.883 "nvme_admin": false, 01:23:22.883 "nvme_io": false, 01:23:22.883 "nvme_io_md": false, 01:23:22.883 "write_zeroes": true, 01:23:22.883 "zcopy": true, 01:23:22.883 "get_zone_info": false, 01:23:22.883 "zone_management": false, 01:23:22.883 "zone_append": false, 01:23:22.883 "compare": false, 01:23:22.883 "compare_and_write": false, 01:23:22.883 "abort": true, 01:23:22.883 "seek_hole": false, 01:23:22.883 "seek_data": false, 01:23:22.883 "copy": true, 01:23:22.883 "nvme_iov_md": false 01:23:22.883 }, 01:23:22.883 "memory_domains": [ 01:23:22.883 { 01:23:22.883 "dma_device_id": "system", 01:23:22.883 "dma_device_type": 1 01:23:22.883 }, 01:23:22.883 { 01:23:22.883 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:23:22.883 "dma_device_type": 2 01:23:22.883 } 01:23:22.883 ], 01:23:22.883 "driver_specific": {} 01:23:22.883 } 01:23:22.883 ] 01:23:22.883 05:18:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:22.883 05:18:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 01:23:22.883 05:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 01:23:22.883 05:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 01:23:22.883 05:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 01:23:22.883 05:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:23:22.883 05:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:23:22.883 05:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 01:23:22.883 05:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:23:22.883 05:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 01:23:22.883 05:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:23:22.883 05:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:23:22.883 05:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:23:22.883 05:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 01:23:22.883 05:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:23:22.883 05:18:14 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:23:22.883 05:18:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:22.883 05:18:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:23:22.883 05:18:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:22.883 05:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:23:22.883 "name": "Existed_Raid", 01:23:22.883 "uuid": "84a94b1d-63c2-46b0-bf89-fa29cd98bd62", 01:23:22.883 "strip_size_kb": 64, 01:23:22.883 "state": "configuring", 01:23:22.883 "raid_level": "concat", 01:23:22.883 "superblock": true, 01:23:22.883 "num_base_bdevs": 3, 01:23:22.883 "num_base_bdevs_discovered": 2, 01:23:22.883 "num_base_bdevs_operational": 3, 01:23:22.883 "base_bdevs_list": [ 01:23:22.883 { 01:23:22.883 "name": "BaseBdev1", 01:23:22.883 "uuid": "00fd0677-e8cb-4fa3-91ff-0f77d66a057b", 01:23:22.883 "is_configured": true, 01:23:22.883 "data_offset": 2048, 01:23:22.883 "data_size": 63488 01:23:22.883 }, 01:23:22.883 { 01:23:22.883 "name": "BaseBdev2", 01:23:22.883 "uuid": "5d651971-eb25-4cb5-8439-cbf360a78fa2", 01:23:22.883 "is_configured": true, 01:23:22.883 "data_offset": 2048, 01:23:22.883 "data_size": 63488 01:23:22.883 }, 01:23:22.883 { 01:23:22.883 "name": "BaseBdev3", 01:23:22.883 "uuid": "00000000-0000-0000-0000-000000000000", 01:23:22.883 "is_configured": false, 01:23:22.883 "data_offset": 0, 01:23:22.883 "data_size": 0 01:23:22.883 } 01:23:22.883 ] 01:23:22.883 }' 01:23:22.883 05:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:23:22.883 05:18:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:23:23.448 05:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 01:23:23.448 05:18:14 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:23.448 05:18:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:23:23.448 [2024-12-09 05:18:14.822203] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 01:23:23.448 [2024-12-09 05:18:14.822939] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 01:23:23.448 [2024-12-09 05:18:14.822978] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 01:23:23.448 BaseBdev3 01:23:23.448 [2024-12-09 05:18:14.823393] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 01:23:23.448 [2024-12-09 05:18:14.823651] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 01:23:23.448 [2024-12-09 05:18:14.823675] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 01:23:23.448 [2024-12-09 05:18:14.823858] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:23:23.448 05:18:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:23.448 05:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 01:23:23.448 05:18:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 01:23:23.448 05:18:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 01:23:23.448 05:18:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 01:23:23.448 05:18:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 01:23:23.448 05:18:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 01:23:23.448 05:18:14 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 01:23:23.448 05:18:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:23.448 05:18:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:23:23.448 05:18:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:23.448 05:18:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 01:23:23.448 05:18:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:23.448 05:18:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:23:23.448 [ 01:23:23.448 { 01:23:23.448 "name": "BaseBdev3", 01:23:23.448 "aliases": [ 01:23:23.448 "ff6b57bc-50e0-4113-8296-a01213843676" 01:23:23.448 ], 01:23:23.448 "product_name": "Malloc disk", 01:23:23.448 "block_size": 512, 01:23:23.448 "num_blocks": 65536, 01:23:23.448 "uuid": "ff6b57bc-50e0-4113-8296-a01213843676", 01:23:23.448 "assigned_rate_limits": { 01:23:23.448 "rw_ios_per_sec": 0, 01:23:23.448 "rw_mbytes_per_sec": 0, 01:23:23.448 "r_mbytes_per_sec": 0, 01:23:23.448 "w_mbytes_per_sec": 0 01:23:23.448 }, 01:23:23.448 "claimed": true, 01:23:23.448 "claim_type": "exclusive_write", 01:23:23.448 "zoned": false, 01:23:23.448 "supported_io_types": { 01:23:23.448 "read": true, 01:23:23.448 "write": true, 01:23:23.448 "unmap": true, 01:23:23.448 "flush": true, 01:23:23.448 "reset": true, 01:23:23.448 "nvme_admin": false, 01:23:23.448 "nvme_io": false, 01:23:23.448 "nvme_io_md": false, 01:23:23.448 "write_zeroes": true, 01:23:23.448 "zcopy": true, 01:23:23.448 "get_zone_info": false, 01:23:23.448 "zone_management": false, 01:23:23.448 "zone_append": false, 01:23:23.448 "compare": false, 01:23:23.448 "compare_and_write": false, 01:23:23.448 "abort": true, 01:23:23.448 "seek_hole": false, 01:23:23.448 "seek_data": false, 
01:23:23.448 "copy": true, 01:23:23.448 "nvme_iov_md": false 01:23:23.448 }, 01:23:23.448 "memory_domains": [ 01:23:23.448 { 01:23:23.448 "dma_device_id": "system", 01:23:23.448 "dma_device_type": 1 01:23:23.448 }, 01:23:23.448 { 01:23:23.448 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:23:23.448 "dma_device_type": 2 01:23:23.448 } 01:23:23.448 ], 01:23:23.448 "driver_specific": {} 01:23:23.448 } 01:23:23.448 ] 01:23:23.448 05:18:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:23.448 05:18:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 01:23:23.448 05:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 01:23:23.448 05:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 01:23:23.448 05:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 01:23:23.448 05:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:23:23.448 05:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:23:23.448 05:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 01:23:23.448 05:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:23:23.448 05:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 01:23:23.448 05:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:23:23.448 05:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:23:23.448 05:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:23:23.448 05:18:14 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 01:23:23.448 05:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:23:23.448 05:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:23:23.448 05:18:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:23.448 05:18:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:23:23.448 05:18:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:23.448 05:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:23:23.448 "name": "Existed_Raid", 01:23:23.448 "uuid": "84a94b1d-63c2-46b0-bf89-fa29cd98bd62", 01:23:23.448 "strip_size_kb": 64, 01:23:23.448 "state": "online", 01:23:23.448 "raid_level": "concat", 01:23:23.448 "superblock": true, 01:23:23.448 "num_base_bdevs": 3, 01:23:23.448 "num_base_bdevs_discovered": 3, 01:23:23.448 "num_base_bdevs_operational": 3, 01:23:23.448 "base_bdevs_list": [ 01:23:23.448 { 01:23:23.448 "name": "BaseBdev1", 01:23:23.448 "uuid": "00fd0677-e8cb-4fa3-91ff-0f77d66a057b", 01:23:23.448 "is_configured": true, 01:23:23.448 "data_offset": 2048, 01:23:23.448 "data_size": 63488 01:23:23.448 }, 01:23:23.448 { 01:23:23.448 "name": "BaseBdev2", 01:23:23.448 "uuid": "5d651971-eb25-4cb5-8439-cbf360a78fa2", 01:23:23.448 "is_configured": true, 01:23:23.448 "data_offset": 2048, 01:23:23.448 "data_size": 63488 01:23:23.448 }, 01:23:23.448 { 01:23:23.448 "name": "BaseBdev3", 01:23:23.448 "uuid": "ff6b57bc-50e0-4113-8296-a01213843676", 01:23:23.448 "is_configured": true, 01:23:23.448 "data_offset": 2048, 01:23:23.448 "data_size": 63488 01:23:23.448 } 01:23:23.448 ] 01:23:23.448 }' 01:23:23.448 05:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:23:23.448 05:18:14 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:23:24.011 05:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 01:23:24.011 05:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 01:23:24.011 05:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 01:23:24.011 05:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 01:23:24.011 05:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 01:23:24.011 05:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 01:23:24.011 05:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 01:23:24.011 05:18:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:24.011 05:18:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:23:24.011 05:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 01:23:24.011 [2024-12-09 05:18:15.386837] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 01:23:24.011 05:18:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:24.011 05:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 01:23:24.011 "name": "Existed_Raid", 01:23:24.011 "aliases": [ 01:23:24.011 "84a94b1d-63c2-46b0-bf89-fa29cd98bd62" 01:23:24.011 ], 01:23:24.011 "product_name": "Raid Volume", 01:23:24.011 "block_size": 512, 01:23:24.011 "num_blocks": 190464, 01:23:24.011 "uuid": "84a94b1d-63c2-46b0-bf89-fa29cd98bd62", 01:23:24.011 "assigned_rate_limits": { 01:23:24.011 "rw_ios_per_sec": 0, 01:23:24.011 "rw_mbytes_per_sec": 0, 01:23:24.011 
"r_mbytes_per_sec": 0, 01:23:24.011 "w_mbytes_per_sec": 0 01:23:24.011 }, 01:23:24.011 "claimed": false, 01:23:24.011 "zoned": false, 01:23:24.011 "supported_io_types": { 01:23:24.011 "read": true, 01:23:24.011 "write": true, 01:23:24.011 "unmap": true, 01:23:24.011 "flush": true, 01:23:24.011 "reset": true, 01:23:24.011 "nvme_admin": false, 01:23:24.011 "nvme_io": false, 01:23:24.011 "nvme_io_md": false, 01:23:24.011 "write_zeroes": true, 01:23:24.011 "zcopy": false, 01:23:24.011 "get_zone_info": false, 01:23:24.011 "zone_management": false, 01:23:24.011 "zone_append": false, 01:23:24.011 "compare": false, 01:23:24.011 "compare_and_write": false, 01:23:24.011 "abort": false, 01:23:24.011 "seek_hole": false, 01:23:24.011 "seek_data": false, 01:23:24.011 "copy": false, 01:23:24.011 "nvme_iov_md": false 01:23:24.011 }, 01:23:24.011 "memory_domains": [ 01:23:24.011 { 01:23:24.011 "dma_device_id": "system", 01:23:24.011 "dma_device_type": 1 01:23:24.011 }, 01:23:24.011 { 01:23:24.011 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:23:24.011 "dma_device_type": 2 01:23:24.011 }, 01:23:24.011 { 01:23:24.011 "dma_device_id": "system", 01:23:24.011 "dma_device_type": 1 01:23:24.011 }, 01:23:24.011 { 01:23:24.011 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:23:24.011 "dma_device_type": 2 01:23:24.011 }, 01:23:24.011 { 01:23:24.011 "dma_device_id": "system", 01:23:24.011 "dma_device_type": 1 01:23:24.011 }, 01:23:24.011 { 01:23:24.011 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:23:24.011 "dma_device_type": 2 01:23:24.011 } 01:23:24.011 ], 01:23:24.011 "driver_specific": { 01:23:24.011 "raid": { 01:23:24.011 "uuid": "84a94b1d-63c2-46b0-bf89-fa29cd98bd62", 01:23:24.011 "strip_size_kb": 64, 01:23:24.011 "state": "online", 01:23:24.011 "raid_level": "concat", 01:23:24.011 "superblock": true, 01:23:24.011 "num_base_bdevs": 3, 01:23:24.011 "num_base_bdevs_discovered": 3, 01:23:24.011 "num_base_bdevs_operational": 3, 01:23:24.011 "base_bdevs_list": [ 01:23:24.011 { 01:23:24.011 
"name": "BaseBdev1", 01:23:24.011 "uuid": "00fd0677-e8cb-4fa3-91ff-0f77d66a057b", 01:23:24.011 "is_configured": true, 01:23:24.011 "data_offset": 2048, 01:23:24.011 "data_size": 63488 01:23:24.012 }, 01:23:24.012 { 01:23:24.012 "name": "BaseBdev2", 01:23:24.012 "uuid": "5d651971-eb25-4cb5-8439-cbf360a78fa2", 01:23:24.012 "is_configured": true, 01:23:24.012 "data_offset": 2048, 01:23:24.012 "data_size": 63488 01:23:24.012 }, 01:23:24.012 { 01:23:24.012 "name": "BaseBdev3", 01:23:24.012 "uuid": "ff6b57bc-50e0-4113-8296-a01213843676", 01:23:24.012 "is_configured": true, 01:23:24.012 "data_offset": 2048, 01:23:24.012 "data_size": 63488 01:23:24.012 } 01:23:24.012 ] 01:23:24.012 } 01:23:24.012 } 01:23:24.012 }' 01:23:24.012 05:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 01:23:24.012 05:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 01:23:24.012 BaseBdev2 01:23:24.012 BaseBdev3' 01:23:24.012 05:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:23:24.012 05:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 01:23:24.012 05:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:23:24.012 05:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:23:24.012 05:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 01:23:24.012 05:18:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:24.012 05:18:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:23:24.012 05:18:15 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:24.012 05:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:23:24.012 05:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:23:24.012 05:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:23:24.012 05:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:23:24.012 05:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 01:23:24.012 05:18:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:24.012 05:18:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:23:24.012 05:18:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:24.269 05:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:23:24.269 05:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:23:24.269 05:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:23:24.269 05:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 01:23:24.269 05:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:23:24.269 05:18:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:24.269 05:18:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:23:24.269 05:18:15 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:24.269 05:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:23:24.269 05:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:23:24.269 05:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 01:23:24.270 05:18:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:24.270 05:18:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:23:24.270 [2024-12-09 05:18:15.694658] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 01:23:24.270 [2024-12-09 05:18:15.694698] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 01:23:24.270 [2024-12-09 05:18:15.694830] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 01:23:24.270 05:18:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:24.270 05:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 01:23:24.270 05:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 01:23:24.270 05:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 01:23:24.270 05:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 01:23:24.270 05:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 01:23:24.270 05:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 01:23:24.270 05:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:23:24.270 05:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=offline 01:23:24.270 05:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 01:23:24.270 05:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:23:24.270 05:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 01:23:24.270 05:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:23:24.270 05:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:23:24.270 05:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:23:24.270 05:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 01:23:24.270 05:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:23:24.270 05:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:23:24.270 05:18:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:24.270 05:18:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:23:24.270 05:18:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:24.270 05:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:23:24.270 "name": "Existed_Raid", 01:23:24.270 "uuid": "84a94b1d-63c2-46b0-bf89-fa29cd98bd62", 01:23:24.270 "strip_size_kb": 64, 01:23:24.270 "state": "offline", 01:23:24.270 "raid_level": "concat", 01:23:24.270 "superblock": true, 01:23:24.270 "num_base_bdevs": 3, 01:23:24.270 "num_base_bdevs_discovered": 2, 01:23:24.270 "num_base_bdevs_operational": 2, 01:23:24.270 "base_bdevs_list": [ 01:23:24.270 { 01:23:24.270 "name": null, 01:23:24.270 "uuid": 
"00000000-0000-0000-0000-000000000000", 01:23:24.270 "is_configured": false, 01:23:24.270 "data_offset": 0, 01:23:24.270 "data_size": 63488 01:23:24.270 }, 01:23:24.270 { 01:23:24.270 "name": "BaseBdev2", 01:23:24.270 "uuid": "5d651971-eb25-4cb5-8439-cbf360a78fa2", 01:23:24.270 "is_configured": true, 01:23:24.270 "data_offset": 2048, 01:23:24.270 "data_size": 63488 01:23:24.270 }, 01:23:24.270 { 01:23:24.270 "name": "BaseBdev3", 01:23:24.270 "uuid": "ff6b57bc-50e0-4113-8296-a01213843676", 01:23:24.270 "is_configured": true, 01:23:24.270 "data_offset": 2048, 01:23:24.270 "data_size": 63488 01:23:24.270 } 01:23:24.270 ] 01:23:24.270 }' 01:23:24.270 05:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:23:24.270 05:18:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:23:24.834 05:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 01:23:24.834 05:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 01:23:24.834 05:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 01:23:24.834 05:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 01:23:24.834 05:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:24.834 05:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:23:24.834 05:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:24.834 05:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 01:23:24.834 05:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 01:23:24.834 05:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 
01:23:24.834 05:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:24.834 05:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:23:24.834 [2024-12-09 05:18:16.375764] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 01:23:25.093 05:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:25.093 05:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 01:23:25.093 05:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 01:23:25.093 05:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 01:23:25.093 05:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 01:23:25.093 05:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:25.093 05:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:23:25.093 05:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:25.093 05:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 01:23:25.093 05:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 01:23:25.093 05:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 01:23:25.093 05:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:25.093 05:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:23:25.093 [2024-12-09 05:18:16.520607] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 01:23:25.093 [2024-12-09 05:18:16.520900] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 01:23:25.093 05:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:25.093 05:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 01:23:25.093 05:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 01:23:25.093 05:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 01:23:25.094 05:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 01:23:25.094 05:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:25.094 05:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:23:25.094 05:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:25.094 05:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 01:23:25.094 05:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 01:23:25.094 05:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 01:23:25.094 05:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 01:23:25.094 05:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 01:23:25.094 05:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 01:23:25.094 05:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:25.094 05:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:23:25.353 BaseBdev2 01:23:25.353 05:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:25.353 
05:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 01:23:25.353 05:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 01:23:25.353 05:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 01:23:25.353 05:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 01:23:25.353 05:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 01:23:25.353 05:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 01:23:25.353 05:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 01:23:25.353 05:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:25.353 05:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:23:25.353 05:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:25.353 05:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 01:23:25.353 05:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:25.353 05:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:23:25.353 [ 01:23:25.353 { 01:23:25.353 "name": "BaseBdev2", 01:23:25.353 "aliases": [ 01:23:25.353 "be23ce03-cae4-4cf8-8540-d5e3aabe144e" 01:23:25.353 ], 01:23:25.353 "product_name": "Malloc disk", 01:23:25.353 "block_size": 512, 01:23:25.353 "num_blocks": 65536, 01:23:25.353 "uuid": "be23ce03-cae4-4cf8-8540-d5e3aabe144e", 01:23:25.353 "assigned_rate_limits": { 01:23:25.353 "rw_ios_per_sec": 0, 01:23:25.353 "rw_mbytes_per_sec": 0, 01:23:25.353 "r_mbytes_per_sec": 0, 01:23:25.353 "w_mbytes_per_sec": 0 
01:23:25.353 }, 01:23:25.353 "claimed": false, 01:23:25.353 "zoned": false, 01:23:25.353 "supported_io_types": { 01:23:25.353 "read": true, 01:23:25.353 "write": true, 01:23:25.353 "unmap": true, 01:23:25.353 "flush": true, 01:23:25.353 "reset": true, 01:23:25.353 "nvme_admin": false, 01:23:25.353 "nvme_io": false, 01:23:25.353 "nvme_io_md": false, 01:23:25.353 "write_zeroes": true, 01:23:25.353 "zcopy": true, 01:23:25.353 "get_zone_info": false, 01:23:25.353 "zone_management": false, 01:23:25.353 "zone_append": false, 01:23:25.353 "compare": false, 01:23:25.353 "compare_and_write": false, 01:23:25.353 "abort": true, 01:23:25.353 "seek_hole": false, 01:23:25.353 "seek_data": false, 01:23:25.353 "copy": true, 01:23:25.353 "nvme_iov_md": false 01:23:25.353 }, 01:23:25.353 "memory_domains": [ 01:23:25.353 { 01:23:25.353 "dma_device_id": "system", 01:23:25.353 "dma_device_type": 1 01:23:25.353 }, 01:23:25.353 { 01:23:25.353 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:23:25.353 "dma_device_type": 2 01:23:25.353 } 01:23:25.353 ], 01:23:25.353 "driver_specific": {} 01:23:25.353 } 01:23:25.353 ] 01:23:25.353 05:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:25.353 05:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 01:23:25.353 05:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 01:23:25.353 05:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 01:23:25.353 05:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 01:23:25.353 05:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:25.353 05:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:23:25.353 BaseBdev3 01:23:25.353 05:18:16 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:25.353 05:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 01:23:25.353 05:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 01:23:25.354 05:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 01:23:25.354 05:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 01:23:25.354 05:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 01:23:25.354 05:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 01:23:25.354 05:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 01:23:25.354 05:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:25.354 05:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:23:25.354 05:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:25.354 05:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 01:23:25.354 05:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:25.354 05:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:23:25.354 [ 01:23:25.354 { 01:23:25.354 "name": "BaseBdev3", 01:23:25.354 "aliases": [ 01:23:25.354 "e4db5db0-2276-4c47-94c4-09ff3b9d5d0a" 01:23:25.354 ], 01:23:25.354 "product_name": "Malloc disk", 01:23:25.354 "block_size": 512, 01:23:25.354 "num_blocks": 65536, 01:23:25.354 "uuid": "e4db5db0-2276-4c47-94c4-09ff3b9d5d0a", 01:23:25.354 "assigned_rate_limits": { 01:23:25.354 "rw_ios_per_sec": 0, 01:23:25.354 "rw_mbytes_per_sec": 0, 
01:23:25.354 "r_mbytes_per_sec": 0, 01:23:25.354 "w_mbytes_per_sec": 0 01:23:25.354 }, 01:23:25.354 "claimed": false, 01:23:25.354 "zoned": false, 01:23:25.354 "supported_io_types": { 01:23:25.354 "read": true, 01:23:25.354 "write": true, 01:23:25.354 "unmap": true, 01:23:25.354 "flush": true, 01:23:25.354 "reset": true, 01:23:25.354 "nvme_admin": false, 01:23:25.354 "nvme_io": false, 01:23:25.354 "nvme_io_md": false, 01:23:25.354 "write_zeroes": true, 01:23:25.354 "zcopy": true, 01:23:25.354 "get_zone_info": false, 01:23:25.354 "zone_management": false, 01:23:25.354 "zone_append": false, 01:23:25.354 "compare": false, 01:23:25.354 "compare_and_write": false, 01:23:25.354 "abort": true, 01:23:25.354 "seek_hole": false, 01:23:25.354 "seek_data": false, 01:23:25.354 "copy": true, 01:23:25.354 "nvme_iov_md": false 01:23:25.354 }, 01:23:25.354 "memory_domains": [ 01:23:25.354 { 01:23:25.354 "dma_device_id": "system", 01:23:25.354 "dma_device_type": 1 01:23:25.354 }, 01:23:25.354 { 01:23:25.354 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:23:25.354 "dma_device_type": 2 01:23:25.354 } 01:23:25.354 ], 01:23:25.354 "driver_specific": {} 01:23:25.354 } 01:23:25.354 ] 01:23:25.354 05:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:25.354 05:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 01:23:25.354 05:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 01:23:25.354 05:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 01:23:25.354 05:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 01:23:25.354 05:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:25.354 05:18:16 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 01:23:25.354 [2024-12-09 05:18:16.819709] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 01:23:25.354 [2024-12-09 05:18:16.819968] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 01:23:25.354 [2024-12-09 05:18:16.820108] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 01:23:25.354 [2024-12-09 05:18:16.822875] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 01:23:25.354 05:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:25.354 05:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 01:23:25.354 05:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:23:25.354 05:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:23:25.354 05:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 01:23:25.354 05:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:23:25.354 05:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 01:23:25.354 05:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:23:25.354 05:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:23:25.354 05:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:23:25.354 05:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 01:23:25.354 05:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:23:25.354 05:18:16 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:23:25.354 05:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:25.354 05:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:23:25.354 05:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:25.354 05:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:23:25.354 "name": "Existed_Raid", 01:23:25.354 "uuid": "fcd0dba4-af27-4207-9ce6-3ef3ad64b32c", 01:23:25.354 "strip_size_kb": 64, 01:23:25.354 "state": "configuring", 01:23:25.354 "raid_level": "concat", 01:23:25.354 "superblock": true, 01:23:25.354 "num_base_bdevs": 3, 01:23:25.354 "num_base_bdevs_discovered": 2, 01:23:25.354 "num_base_bdevs_operational": 3, 01:23:25.354 "base_bdevs_list": [ 01:23:25.354 { 01:23:25.354 "name": "BaseBdev1", 01:23:25.354 "uuid": "00000000-0000-0000-0000-000000000000", 01:23:25.354 "is_configured": false, 01:23:25.354 "data_offset": 0, 01:23:25.354 "data_size": 0 01:23:25.354 }, 01:23:25.354 { 01:23:25.354 "name": "BaseBdev2", 01:23:25.354 "uuid": "be23ce03-cae4-4cf8-8540-d5e3aabe144e", 01:23:25.354 "is_configured": true, 01:23:25.354 "data_offset": 2048, 01:23:25.354 "data_size": 63488 01:23:25.354 }, 01:23:25.354 { 01:23:25.354 "name": "BaseBdev3", 01:23:25.354 "uuid": "e4db5db0-2276-4c47-94c4-09ff3b9d5d0a", 01:23:25.354 "is_configured": true, 01:23:25.354 "data_offset": 2048, 01:23:25.354 "data_size": 63488 01:23:25.354 } 01:23:25.354 ] 01:23:25.354 }' 01:23:25.354 05:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:23:25.354 05:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:23:25.936 05:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 
01:23:25.936 05:18:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:25.936 05:18:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:23:25.936 [2024-12-09 05:18:17.331894] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 01:23:25.936 05:18:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:25.936 05:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 01:23:25.936 05:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:23:25.936 05:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:23:25.936 05:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 01:23:25.936 05:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:23:25.936 05:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 01:23:25.936 05:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:23:25.936 05:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:23:25.936 05:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:23:25.936 05:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 01:23:25.936 05:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:23:25.936 05:18:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:25.936 05:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
01:23:25.936 05:18:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:23:25.936 05:18:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:25.936 05:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:23:25.936 "name": "Existed_Raid", 01:23:25.936 "uuid": "fcd0dba4-af27-4207-9ce6-3ef3ad64b32c", 01:23:25.936 "strip_size_kb": 64, 01:23:25.936 "state": "configuring", 01:23:25.936 "raid_level": "concat", 01:23:25.936 "superblock": true, 01:23:25.936 "num_base_bdevs": 3, 01:23:25.936 "num_base_bdevs_discovered": 1, 01:23:25.936 "num_base_bdevs_operational": 3, 01:23:25.936 "base_bdevs_list": [ 01:23:25.936 { 01:23:25.936 "name": "BaseBdev1", 01:23:25.936 "uuid": "00000000-0000-0000-0000-000000000000", 01:23:25.936 "is_configured": false, 01:23:25.936 "data_offset": 0, 01:23:25.936 "data_size": 0 01:23:25.936 }, 01:23:25.936 { 01:23:25.936 "name": null, 01:23:25.936 "uuid": "be23ce03-cae4-4cf8-8540-d5e3aabe144e", 01:23:25.936 "is_configured": false, 01:23:25.936 "data_offset": 0, 01:23:25.936 "data_size": 63488 01:23:25.936 }, 01:23:25.936 { 01:23:25.936 "name": "BaseBdev3", 01:23:25.936 "uuid": "e4db5db0-2276-4c47-94c4-09ff3b9d5d0a", 01:23:25.936 "is_configured": true, 01:23:25.936 "data_offset": 2048, 01:23:25.936 "data_size": 63488 01:23:25.936 } 01:23:25.936 ] 01:23:25.936 }' 01:23:25.936 05:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:23:25.936 05:18:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:23:26.503 05:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 01:23:26.503 05:18:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:26.503 05:18:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:23:26.503 05:18:17 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 01:23:26.503 05:18:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:26.503 05:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 01:23:26.503 05:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 01:23:26.503 05:18:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:26.503 05:18:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:23:26.503 [2024-12-09 05:18:17.941456] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 01:23:26.503 BaseBdev1 01:23:26.503 05:18:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:26.503 05:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 01:23:26.503 05:18:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 01:23:26.503 05:18:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 01:23:26.503 05:18:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 01:23:26.503 05:18:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 01:23:26.503 05:18:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 01:23:26.503 05:18:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 01:23:26.503 05:18:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:26.503 05:18:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:23:26.503 
05:18:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:26.503 05:18:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 01:23:26.503 05:18:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:26.503 05:18:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:23:26.503 [ 01:23:26.503 { 01:23:26.503 "name": "BaseBdev1", 01:23:26.503 "aliases": [ 01:23:26.503 "da552f12-6e07-4847-a07c-cd6b6f5419f9" 01:23:26.503 ], 01:23:26.503 "product_name": "Malloc disk", 01:23:26.503 "block_size": 512, 01:23:26.503 "num_blocks": 65536, 01:23:26.503 "uuid": "da552f12-6e07-4847-a07c-cd6b6f5419f9", 01:23:26.503 "assigned_rate_limits": { 01:23:26.503 "rw_ios_per_sec": 0, 01:23:26.503 "rw_mbytes_per_sec": 0, 01:23:26.503 "r_mbytes_per_sec": 0, 01:23:26.503 "w_mbytes_per_sec": 0 01:23:26.503 }, 01:23:26.503 "claimed": true, 01:23:26.503 "claim_type": "exclusive_write", 01:23:26.503 "zoned": false, 01:23:26.503 "supported_io_types": { 01:23:26.503 "read": true, 01:23:26.503 "write": true, 01:23:26.503 "unmap": true, 01:23:26.503 "flush": true, 01:23:26.503 "reset": true, 01:23:26.503 "nvme_admin": false, 01:23:26.503 "nvme_io": false, 01:23:26.503 "nvme_io_md": false, 01:23:26.503 "write_zeroes": true, 01:23:26.503 "zcopy": true, 01:23:26.503 "get_zone_info": false, 01:23:26.503 "zone_management": false, 01:23:26.503 "zone_append": false, 01:23:26.503 "compare": false, 01:23:26.503 "compare_and_write": false, 01:23:26.503 "abort": true, 01:23:26.503 "seek_hole": false, 01:23:26.503 "seek_data": false, 01:23:26.503 "copy": true, 01:23:26.503 "nvme_iov_md": false 01:23:26.503 }, 01:23:26.503 "memory_domains": [ 01:23:26.503 { 01:23:26.503 "dma_device_id": "system", 01:23:26.503 "dma_device_type": 1 01:23:26.503 }, 01:23:26.503 { 01:23:26.503 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
01:23:26.503 "dma_device_type": 2 01:23:26.503 } 01:23:26.503 ], 01:23:26.503 "driver_specific": {} 01:23:26.503 } 01:23:26.503 ] 01:23:26.503 05:18:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:26.503 05:18:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 01:23:26.503 05:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 01:23:26.503 05:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:23:26.503 05:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:23:26.503 05:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 01:23:26.503 05:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:23:26.503 05:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 01:23:26.503 05:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:23:26.503 05:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:23:26.503 05:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:23:26.503 05:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 01:23:26.503 05:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:23:26.503 05:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:23:26.503 05:18:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:26.503 05:18:17 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 01:23:26.503 05:18:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:26.503 05:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:23:26.503 "name": "Existed_Raid", 01:23:26.503 "uuid": "fcd0dba4-af27-4207-9ce6-3ef3ad64b32c", 01:23:26.503 "strip_size_kb": 64, 01:23:26.503 "state": "configuring", 01:23:26.503 "raid_level": "concat", 01:23:26.503 "superblock": true, 01:23:26.503 "num_base_bdevs": 3, 01:23:26.503 "num_base_bdevs_discovered": 2, 01:23:26.503 "num_base_bdevs_operational": 3, 01:23:26.503 "base_bdevs_list": [ 01:23:26.503 { 01:23:26.503 "name": "BaseBdev1", 01:23:26.503 "uuid": "da552f12-6e07-4847-a07c-cd6b6f5419f9", 01:23:26.503 "is_configured": true, 01:23:26.503 "data_offset": 2048, 01:23:26.503 "data_size": 63488 01:23:26.503 }, 01:23:26.503 { 01:23:26.503 "name": null, 01:23:26.503 "uuid": "be23ce03-cae4-4cf8-8540-d5e3aabe144e", 01:23:26.503 "is_configured": false, 01:23:26.503 "data_offset": 0, 01:23:26.503 "data_size": 63488 01:23:26.503 }, 01:23:26.503 { 01:23:26.503 "name": "BaseBdev3", 01:23:26.503 "uuid": "e4db5db0-2276-4c47-94c4-09ff3b9d5d0a", 01:23:26.503 "is_configured": true, 01:23:26.503 "data_offset": 2048, 01:23:26.503 "data_size": 63488 01:23:26.503 } 01:23:26.503 ] 01:23:26.503 }' 01:23:26.503 05:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:23:26.503 05:18:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:23:27.069 05:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 01:23:27.069 05:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 01:23:27.069 05:18:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:27.069 05:18:18 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 01:23:27.069 05:18:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:27.069 05:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 01:23:27.069 05:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 01:23:27.069 05:18:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:27.069 05:18:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:23:27.069 [2024-12-09 05:18:18.529755] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 01:23:27.069 05:18:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:27.069 05:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 01:23:27.069 05:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:23:27.069 05:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:23:27.069 05:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 01:23:27.069 05:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:23:27.069 05:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 01:23:27.069 05:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:23:27.069 05:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:23:27.069 05:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:23:27.069 05:18:18 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 01:23:27.069 05:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:23:27.069 05:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:23:27.069 05:18:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:27.069 05:18:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:23:27.069 05:18:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:27.069 05:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:23:27.069 "name": "Existed_Raid", 01:23:27.069 "uuid": "fcd0dba4-af27-4207-9ce6-3ef3ad64b32c", 01:23:27.069 "strip_size_kb": 64, 01:23:27.069 "state": "configuring", 01:23:27.069 "raid_level": "concat", 01:23:27.069 "superblock": true, 01:23:27.069 "num_base_bdevs": 3, 01:23:27.069 "num_base_bdevs_discovered": 1, 01:23:27.069 "num_base_bdevs_operational": 3, 01:23:27.069 "base_bdevs_list": [ 01:23:27.069 { 01:23:27.069 "name": "BaseBdev1", 01:23:27.069 "uuid": "da552f12-6e07-4847-a07c-cd6b6f5419f9", 01:23:27.069 "is_configured": true, 01:23:27.069 "data_offset": 2048, 01:23:27.069 "data_size": 63488 01:23:27.069 }, 01:23:27.069 { 01:23:27.069 "name": null, 01:23:27.069 "uuid": "be23ce03-cae4-4cf8-8540-d5e3aabe144e", 01:23:27.069 "is_configured": false, 01:23:27.069 "data_offset": 0, 01:23:27.069 "data_size": 63488 01:23:27.069 }, 01:23:27.069 { 01:23:27.069 "name": null, 01:23:27.069 "uuid": "e4db5db0-2276-4c47-94c4-09ff3b9d5d0a", 01:23:27.069 "is_configured": false, 01:23:27.069 "data_offset": 0, 01:23:27.069 "data_size": 63488 01:23:27.069 } 01:23:27.069 ] 01:23:27.069 }' 01:23:27.069 05:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:23:27.069 05:18:18 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 01:23:27.635 05:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 01:23:27.635 05:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 01:23:27.635 05:18:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:27.635 05:18:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:23:27.635 05:18:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:27.635 05:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 01:23:27.635 05:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 01:23:27.635 05:18:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:27.635 05:18:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:23:27.635 [2024-12-09 05:18:19.105957] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 01:23:27.635 05:18:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:27.635 05:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 01:23:27.635 05:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:23:27.635 05:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:23:27.635 05:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 01:23:27.635 05:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:23:27.635 05:18:19 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 01:23:27.635 05:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:23:27.635 05:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:23:27.635 05:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:23:27.635 05:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 01:23:27.635 05:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:23:27.635 05:18:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:27.635 05:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:23:27.635 05:18:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:23:27.635 05:18:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:27.635 05:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:23:27.635 "name": "Existed_Raid", 01:23:27.635 "uuid": "fcd0dba4-af27-4207-9ce6-3ef3ad64b32c", 01:23:27.635 "strip_size_kb": 64, 01:23:27.635 "state": "configuring", 01:23:27.635 "raid_level": "concat", 01:23:27.635 "superblock": true, 01:23:27.635 "num_base_bdevs": 3, 01:23:27.635 "num_base_bdevs_discovered": 2, 01:23:27.635 "num_base_bdevs_operational": 3, 01:23:27.635 "base_bdevs_list": [ 01:23:27.635 { 01:23:27.635 "name": "BaseBdev1", 01:23:27.635 "uuid": "da552f12-6e07-4847-a07c-cd6b6f5419f9", 01:23:27.635 "is_configured": true, 01:23:27.635 "data_offset": 2048, 01:23:27.635 "data_size": 63488 01:23:27.635 }, 01:23:27.635 { 01:23:27.635 "name": null, 01:23:27.635 "uuid": "be23ce03-cae4-4cf8-8540-d5e3aabe144e", 01:23:27.635 "is_configured": 
false, 01:23:27.635 "data_offset": 0, 01:23:27.635 "data_size": 63488 01:23:27.635 }, 01:23:27.635 { 01:23:27.635 "name": "BaseBdev3", 01:23:27.635 "uuid": "e4db5db0-2276-4c47-94c4-09ff3b9d5d0a", 01:23:27.635 "is_configured": true, 01:23:27.635 "data_offset": 2048, 01:23:27.635 "data_size": 63488 01:23:27.636 } 01:23:27.636 ] 01:23:27.636 }' 01:23:27.636 05:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:23:27.636 05:18:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:23:28.201 05:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 01:23:28.201 05:18:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:28.201 05:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 01:23:28.201 05:18:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:23:28.201 05:18:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:28.201 05:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 01:23:28.201 05:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 01:23:28.201 05:18:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:28.201 05:18:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:23:28.201 [2024-12-09 05:18:19.698128] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 01:23:28.201 05:18:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:28.201 05:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 01:23:28.201 05:18:19 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:23:28.201 05:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:23:28.202 05:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 01:23:28.202 05:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:23:28.202 05:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 01:23:28.202 05:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:23:28.202 05:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:23:28.202 05:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:23:28.202 05:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 01:23:28.202 05:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:23:28.202 05:18:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:28.202 05:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:23:28.202 05:18:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:23:28.202 05:18:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:28.459 05:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:23:28.459 "name": "Existed_Raid", 01:23:28.459 "uuid": "fcd0dba4-af27-4207-9ce6-3ef3ad64b32c", 01:23:28.459 "strip_size_kb": 64, 01:23:28.459 "state": "configuring", 01:23:28.459 "raid_level": "concat", 01:23:28.459 "superblock": true, 01:23:28.459 "num_base_bdevs": 3, 01:23:28.459 
"num_base_bdevs_discovered": 1, 01:23:28.459 "num_base_bdevs_operational": 3, 01:23:28.459 "base_bdevs_list": [ 01:23:28.459 { 01:23:28.459 "name": null, 01:23:28.459 "uuid": "da552f12-6e07-4847-a07c-cd6b6f5419f9", 01:23:28.459 "is_configured": false, 01:23:28.459 "data_offset": 0, 01:23:28.459 "data_size": 63488 01:23:28.459 }, 01:23:28.459 { 01:23:28.459 "name": null, 01:23:28.459 "uuid": "be23ce03-cae4-4cf8-8540-d5e3aabe144e", 01:23:28.459 "is_configured": false, 01:23:28.459 "data_offset": 0, 01:23:28.459 "data_size": 63488 01:23:28.459 }, 01:23:28.459 { 01:23:28.459 "name": "BaseBdev3", 01:23:28.459 "uuid": "e4db5db0-2276-4c47-94c4-09ff3b9d5d0a", 01:23:28.459 "is_configured": true, 01:23:28.459 "data_offset": 2048, 01:23:28.459 "data_size": 63488 01:23:28.459 } 01:23:28.459 ] 01:23:28.459 }' 01:23:28.459 05:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:23:28.459 05:18:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:23:28.717 05:18:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 01:23:28.717 05:18:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:28.717 05:18:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:23:28.717 05:18:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 01:23:28.717 05:18:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:28.975 05:18:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 01:23:28.975 05:18:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 01:23:28.975 05:18:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:28.975 05:18:20 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:23:28.975 [2024-12-09 05:18:20.363316] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 01:23:28.975 05:18:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:28.975 05:18:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 01:23:28.975 05:18:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:23:28.975 05:18:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:23:28.975 05:18:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 01:23:28.975 05:18:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:23:28.975 05:18:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 01:23:28.975 05:18:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:23:28.975 05:18:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:23:28.975 05:18:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:23:28.975 05:18:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 01:23:28.975 05:18:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:23:28.975 05:18:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:28.975 05:18:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:23:28.976 05:18:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:23:28.976 
05:18:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:28.976 05:18:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:23:28.976 "name": "Existed_Raid", 01:23:28.976 "uuid": "fcd0dba4-af27-4207-9ce6-3ef3ad64b32c", 01:23:28.976 "strip_size_kb": 64, 01:23:28.976 "state": "configuring", 01:23:28.976 "raid_level": "concat", 01:23:28.976 "superblock": true, 01:23:28.976 "num_base_bdevs": 3, 01:23:28.976 "num_base_bdevs_discovered": 2, 01:23:28.976 "num_base_bdevs_operational": 3, 01:23:28.976 "base_bdevs_list": [ 01:23:28.976 { 01:23:28.976 "name": null, 01:23:28.976 "uuid": "da552f12-6e07-4847-a07c-cd6b6f5419f9", 01:23:28.976 "is_configured": false, 01:23:28.976 "data_offset": 0, 01:23:28.976 "data_size": 63488 01:23:28.976 }, 01:23:28.976 { 01:23:28.976 "name": "BaseBdev2", 01:23:28.976 "uuid": "be23ce03-cae4-4cf8-8540-d5e3aabe144e", 01:23:28.976 "is_configured": true, 01:23:28.976 "data_offset": 2048, 01:23:28.976 "data_size": 63488 01:23:28.976 }, 01:23:28.976 { 01:23:28.976 "name": "BaseBdev3", 01:23:28.976 "uuid": "e4db5db0-2276-4c47-94c4-09ff3b9d5d0a", 01:23:28.976 "is_configured": true, 01:23:28.976 "data_offset": 2048, 01:23:28.976 "data_size": 63488 01:23:28.976 } 01:23:28.976 ] 01:23:28.976 }' 01:23:28.976 05:18:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:23:28.976 05:18:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:23:29.542 05:18:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 01:23:29.542 05:18:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:29.542 05:18:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 01:23:29.542 05:18:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
01:23:29.543 05:18:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:29.543 05:18:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 01:23:29.543 05:18:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 01:23:29.543 05:18:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 01:23:29.543 05:18:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:29.543 05:18:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:23:29.543 05:18:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:29.543 05:18:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u da552f12-6e07-4847-a07c-cd6b6f5419f9 01:23:29.543 05:18:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:29.543 05:18:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:23:29.543 [2024-12-09 05:18:21.022730] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 01:23:29.543 [2024-12-09 05:18:21.023293] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 01:23:29.543 [2024-12-09 05:18:21.023327] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 01:23:29.543 NewBaseBdev 01:23:29.543 [2024-12-09 05:18:21.023674] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 01:23:29.543 [2024-12-09 05:18:21.023870] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 01:23:29.543 [2024-12-09 05:18:21.023887] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000008200 01:23:29.543 [2024-12-09 05:18:21.024066] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:23:29.543 05:18:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:29.543 05:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 01:23:29.543 05:18:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 01:23:29.543 05:18:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 01:23:29.543 05:18:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 01:23:29.543 05:18:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 01:23:29.543 05:18:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 01:23:29.543 05:18:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 01:23:29.543 05:18:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:29.543 05:18:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:23:29.543 05:18:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:29.543 05:18:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 01:23:29.543 05:18:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:29.543 05:18:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:23:29.543 [ 01:23:29.543 { 01:23:29.543 "name": "NewBaseBdev", 01:23:29.543 "aliases": [ 01:23:29.543 "da552f12-6e07-4847-a07c-cd6b6f5419f9" 01:23:29.543 ], 01:23:29.543 "product_name": "Malloc disk", 01:23:29.543 "block_size": 512, 
01:23:29.543 "num_blocks": 65536, 01:23:29.543 "uuid": "da552f12-6e07-4847-a07c-cd6b6f5419f9", 01:23:29.543 "assigned_rate_limits": { 01:23:29.543 "rw_ios_per_sec": 0, 01:23:29.543 "rw_mbytes_per_sec": 0, 01:23:29.543 "r_mbytes_per_sec": 0, 01:23:29.543 "w_mbytes_per_sec": 0 01:23:29.543 }, 01:23:29.543 "claimed": true, 01:23:29.543 "claim_type": "exclusive_write", 01:23:29.543 "zoned": false, 01:23:29.543 "supported_io_types": { 01:23:29.543 "read": true, 01:23:29.543 "write": true, 01:23:29.543 "unmap": true, 01:23:29.543 "flush": true, 01:23:29.543 "reset": true, 01:23:29.543 "nvme_admin": false, 01:23:29.543 "nvme_io": false, 01:23:29.543 "nvme_io_md": false, 01:23:29.543 "write_zeroes": true, 01:23:29.543 "zcopy": true, 01:23:29.543 "get_zone_info": false, 01:23:29.543 "zone_management": false, 01:23:29.543 "zone_append": false, 01:23:29.543 "compare": false, 01:23:29.543 "compare_and_write": false, 01:23:29.543 "abort": true, 01:23:29.543 "seek_hole": false, 01:23:29.543 "seek_data": false, 01:23:29.543 "copy": true, 01:23:29.543 "nvme_iov_md": false 01:23:29.543 }, 01:23:29.543 "memory_domains": [ 01:23:29.543 { 01:23:29.543 "dma_device_id": "system", 01:23:29.543 "dma_device_type": 1 01:23:29.543 }, 01:23:29.543 { 01:23:29.543 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:23:29.543 "dma_device_type": 2 01:23:29.543 } 01:23:29.543 ], 01:23:29.543 "driver_specific": {} 01:23:29.543 } 01:23:29.543 ] 01:23:29.543 05:18:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:29.543 05:18:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 01:23:29.543 05:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 01:23:29.543 05:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:23:29.543 05:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 
-- # local expected_state=online 01:23:29.543 05:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 01:23:29.543 05:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:23:29.543 05:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 01:23:29.543 05:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:23:29.543 05:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:23:29.543 05:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:23:29.543 05:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 01:23:29.543 05:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:23:29.543 05:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:23:29.543 05:18:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:29.543 05:18:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:23:29.543 05:18:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:29.543 05:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:23:29.543 "name": "Existed_Raid", 01:23:29.543 "uuid": "fcd0dba4-af27-4207-9ce6-3ef3ad64b32c", 01:23:29.543 "strip_size_kb": 64, 01:23:29.543 "state": "online", 01:23:29.543 "raid_level": "concat", 01:23:29.543 "superblock": true, 01:23:29.543 "num_base_bdevs": 3, 01:23:29.543 "num_base_bdevs_discovered": 3, 01:23:29.543 "num_base_bdevs_operational": 3, 01:23:29.543 "base_bdevs_list": [ 01:23:29.543 { 01:23:29.543 "name": "NewBaseBdev", 01:23:29.543 "uuid": 
"da552f12-6e07-4847-a07c-cd6b6f5419f9", 01:23:29.543 "is_configured": true, 01:23:29.543 "data_offset": 2048, 01:23:29.543 "data_size": 63488 01:23:29.543 }, 01:23:29.543 { 01:23:29.543 "name": "BaseBdev2", 01:23:29.543 "uuid": "be23ce03-cae4-4cf8-8540-d5e3aabe144e", 01:23:29.543 "is_configured": true, 01:23:29.543 "data_offset": 2048, 01:23:29.543 "data_size": 63488 01:23:29.543 }, 01:23:29.543 { 01:23:29.543 "name": "BaseBdev3", 01:23:29.543 "uuid": "e4db5db0-2276-4c47-94c4-09ff3b9d5d0a", 01:23:29.543 "is_configured": true, 01:23:29.543 "data_offset": 2048, 01:23:29.543 "data_size": 63488 01:23:29.543 } 01:23:29.543 ] 01:23:29.543 }' 01:23:29.543 05:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:23:29.543 05:18:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:23:30.110 05:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 01:23:30.110 05:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 01:23:30.110 05:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 01:23:30.110 05:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 01:23:30.110 05:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 01:23:30.110 05:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 01:23:30.110 05:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 01:23:30.110 05:18:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:30.110 05:18:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:23:30.110 05:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq 
'.[]' 01:23:30.110 [2024-12-09 05:18:21.591304] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 01:23:30.110 05:18:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:30.110 05:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 01:23:30.110 "name": "Existed_Raid", 01:23:30.110 "aliases": [ 01:23:30.110 "fcd0dba4-af27-4207-9ce6-3ef3ad64b32c" 01:23:30.110 ], 01:23:30.110 "product_name": "Raid Volume", 01:23:30.110 "block_size": 512, 01:23:30.110 "num_blocks": 190464, 01:23:30.110 "uuid": "fcd0dba4-af27-4207-9ce6-3ef3ad64b32c", 01:23:30.110 "assigned_rate_limits": { 01:23:30.110 "rw_ios_per_sec": 0, 01:23:30.110 "rw_mbytes_per_sec": 0, 01:23:30.110 "r_mbytes_per_sec": 0, 01:23:30.110 "w_mbytes_per_sec": 0 01:23:30.110 }, 01:23:30.110 "claimed": false, 01:23:30.110 "zoned": false, 01:23:30.110 "supported_io_types": { 01:23:30.110 "read": true, 01:23:30.110 "write": true, 01:23:30.110 "unmap": true, 01:23:30.110 "flush": true, 01:23:30.110 "reset": true, 01:23:30.110 "nvme_admin": false, 01:23:30.110 "nvme_io": false, 01:23:30.110 "nvme_io_md": false, 01:23:30.110 "write_zeroes": true, 01:23:30.110 "zcopy": false, 01:23:30.110 "get_zone_info": false, 01:23:30.110 "zone_management": false, 01:23:30.110 "zone_append": false, 01:23:30.110 "compare": false, 01:23:30.110 "compare_and_write": false, 01:23:30.110 "abort": false, 01:23:30.110 "seek_hole": false, 01:23:30.110 "seek_data": false, 01:23:30.110 "copy": false, 01:23:30.110 "nvme_iov_md": false 01:23:30.110 }, 01:23:30.110 "memory_domains": [ 01:23:30.110 { 01:23:30.110 "dma_device_id": "system", 01:23:30.110 "dma_device_type": 1 01:23:30.110 }, 01:23:30.110 { 01:23:30.110 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:23:30.110 "dma_device_type": 2 01:23:30.110 }, 01:23:30.110 { 01:23:30.110 "dma_device_id": "system", 01:23:30.110 "dma_device_type": 1 01:23:30.110 }, 01:23:30.110 { 01:23:30.110 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:23:30.110 "dma_device_type": 2 01:23:30.110 }, 01:23:30.110 { 01:23:30.110 "dma_device_id": "system", 01:23:30.110 "dma_device_type": 1 01:23:30.110 }, 01:23:30.110 { 01:23:30.110 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:23:30.110 "dma_device_type": 2 01:23:30.110 } 01:23:30.110 ], 01:23:30.110 "driver_specific": { 01:23:30.110 "raid": { 01:23:30.110 "uuid": "fcd0dba4-af27-4207-9ce6-3ef3ad64b32c", 01:23:30.110 "strip_size_kb": 64, 01:23:30.110 "state": "online", 01:23:30.110 "raid_level": "concat", 01:23:30.110 "superblock": true, 01:23:30.110 "num_base_bdevs": 3, 01:23:30.110 "num_base_bdevs_discovered": 3, 01:23:30.110 "num_base_bdevs_operational": 3, 01:23:30.110 "base_bdevs_list": [ 01:23:30.110 { 01:23:30.110 "name": "NewBaseBdev", 01:23:30.110 "uuid": "da552f12-6e07-4847-a07c-cd6b6f5419f9", 01:23:30.110 "is_configured": true, 01:23:30.110 "data_offset": 2048, 01:23:30.110 "data_size": 63488 01:23:30.110 }, 01:23:30.110 { 01:23:30.110 "name": "BaseBdev2", 01:23:30.110 "uuid": "be23ce03-cae4-4cf8-8540-d5e3aabe144e", 01:23:30.110 "is_configured": true, 01:23:30.110 "data_offset": 2048, 01:23:30.110 "data_size": 63488 01:23:30.110 }, 01:23:30.110 { 01:23:30.110 "name": "BaseBdev3", 01:23:30.110 "uuid": "e4db5db0-2276-4c47-94c4-09ff3b9d5d0a", 01:23:30.110 "is_configured": true, 01:23:30.110 "data_offset": 2048, 01:23:30.110 "data_size": 63488 01:23:30.110 } 01:23:30.110 ] 01:23:30.110 } 01:23:30.110 } 01:23:30.110 }' 01:23:30.110 05:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 01:23:30.110 05:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 01:23:30.110 BaseBdev2 01:23:30.110 BaseBdev3' 01:23:30.110 05:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
01:23:30.369 05:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 01:23:30.369 05:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:23:30.369 05:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 01:23:30.369 05:18:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:30.369 05:18:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:23:30.369 05:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:23:30.369 05:18:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:30.369 05:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:23:30.369 05:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:23:30.369 05:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:23:30.369 05:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 01:23:30.369 05:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:23:30.369 05:18:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:30.369 05:18:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:23:30.369 05:18:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:30.369 05:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:23:30.369 05:18:21 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:23:30.369 05:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:23:30.369 05:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 01:23:30.369 05:18:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:30.369 05:18:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:23:30.369 05:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:23:30.369 05:18:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:30.369 05:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:23:30.369 05:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:23:30.369 05:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 01:23:30.369 05:18:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:30.369 05:18:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:23:30.369 [2024-12-09 05:18:21.919048] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 01:23:30.369 [2024-12-09 05:18:21.919238] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 01:23:30.369 [2024-12-09 05:18:21.919390] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 01:23:30.369 [2024-12-09 05:18:21.919473] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 01:23:30.369 [2024-12-09 05:18:21.919495] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008200 name Existed_Raid, state offline 01:23:30.369 05:18:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:30.369 05:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 66168 01:23:30.369 05:18:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 66168 ']' 01:23:30.369 05:18:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 66168 01:23:30.369 05:18:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 01:23:30.369 05:18:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:23:30.369 05:18:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66168 01:23:30.369 killing process with pid 66168 01:23:30.369 05:18:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:23:30.369 05:18:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:23:30.369 05:18:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66168' 01:23:30.369 05:18:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 66168 01:23:30.369 [2024-12-09 05:18:21.954971] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 01:23:30.369 05:18:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 66168 01:23:30.627 [2024-12-09 05:18:22.237834] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 01:23:32.001 05:18:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 01:23:32.001 01:23:32.001 real 0m11.995s 01:23:32.001 user 0m19.628s 01:23:32.001 sys 0m1.792s 01:23:32.001 05:18:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # 
xtrace_disable 01:23:32.001 ************************************ 01:23:32.001 END TEST raid_state_function_test_sb 01:23:32.001 ************************************ 01:23:32.001 05:18:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:23:32.001 05:18:23 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 3 01:23:32.001 05:18:23 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 01:23:32.001 05:18:23 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 01:23:32.001 05:18:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 01:23:32.001 ************************************ 01:23:32.001 START TEST raid_superblock_test 01:23:32.001 ************************************ 01:23:32.001 05:18:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 3 01:23:32.001 05:18:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 01:23:32.001 05:18:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 01:23:32.001 05:18:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 01:23:32.001 05:18:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 01:23:32.001 05:18:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 01:23:32.001 05:18:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 01:23:32.001 05:18:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 01:23:32.001 05:18:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 01:23:32.001 05:18:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 01:23:32.001 05:18:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 01:23:32.001 05:18:23 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 01:23:32.001 05:18:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 01:23:32.001 05:18:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 01:23:32.001 05:18:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 01:23:32.001 05:18:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 01:23:32.001 05:18:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 01:23:32.001 05:18:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=66802 01:23:32.001 05:18:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 66802 01:23:32.001 05:18:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 01:23:32.001 05:18:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 66802 ']' 01:23:32.001 05:18:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:23:32.001 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:23:32.001 05:18:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 01:23:32.001 05:18:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:23:32.001 05:18:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 01:23:32.001 05:18:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:23:32.001 [2024-12-09 05:18:23.556394] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
01:23:32.001 [2024-12-09 05:18:23.556811] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66802 ] 01:23:32.260 [2024-12-09 05:18:23.731271] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:23:32.530 [2024-12-09 05:18:23.891693] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:23:32.817 [2024-12-09 05:18:24.150159] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 01:23:32.817 [2024-12-09 05:18:24.150615] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 01:23:33.076 05:18:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:23:33.076 05:18:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 01:23:33.076 05:18:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 01:23:33.076 05:18:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 01:23:33.076 05:18:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 01:23:33.076 05:18:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 01:23:33.076 05:18:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 01:23:33.076 05:18:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 01:23:33.076 05:18:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 01:23:33.076 05:18:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 01:23:33.076 05:18:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 01:23:33.076 
05:18:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:33.076 05:18:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:23:33.076 malloc1 01:23:33.076 05:18:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:33.076 05:18:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 01:23:33.076 05:18:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:33.076 05:18:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:23:33.076 [2024-12-09 05:18:24.686753] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 01:23:33.076 [2024-12-09 05:18:24.687168] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:23:33.076 [2024-12-09 05:18:24.687253] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 01:23:33.076 [2024-12-09 05:18:24.687571] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:23:33.335 [2024-12-09 05:18:24.691407] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:23:33.335 [2024-12-09 05:18:24.691545] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 01:23:33.335 pt1 01:23:33.335 05:18:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:33.335 05:18:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 01:23:33.335 05:18:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 01:23:33.335 05:18:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 01:23:33.335 05:18:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 01:23:33.335 05:18:24 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 01:23:33.335 05:18:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 01:23:33.335 05:18:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 01:23:33.335 05:18:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 01:23:33.335 05:18:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 01:23:33.335 05:18:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:33.335 05:18:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:23:33.335 malloc2 01:23:33.335 05:18:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:33.335 05:18:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 01:23:33.335 05:18:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:33.335 05:18:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:23:33.335 [2024-12-09 05:18:24.756320] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 01:23:33.335 [2024-12-09 05:18:24.756701] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:23:33.335 [2024-12-09 05:18:24.756802] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 01:23:33.335 [2024-12-09 05:18:24.756985] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:23:33.335 [2024-12-09 05:18:24.760123] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:23:33.335 [2024-12-09 05:18:24.760289] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 01:23:33.335 
pt2 01:23:33.335 05:18:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:33.335 05:18:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 01:23:33.335 05:18:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 01:23:33.335 05:18:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 01:23:33.335 05:18:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 01:23:33.335 05:18:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 01:23:33.335 05:18:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 01:23:33.335 05:18:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 01:23:33.335 05:18:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 01:23:33.335 05:18:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 01:23:33.335 05:18:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:33.335 05:18:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:23:33.335 malloc3 01:23:33.335 05:18:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:33.335 05:18:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 01:23:33.335 05:18:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:33.335 05:18:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:23:33.335 [2024-12-09 05:18:24.829721] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 01:23:33.335 [2024-12-09 05:18:24.830054] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:23:33.335 [2024-12-09 05:18:24.830105] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 01:23:33.335 [2024-12-09 05:18:24.830124] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:23:33.335 [2024-12-09 05:18:24.833192] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:23:33.335 [2024-12-09 05:18:24.833386] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 01:23:33.335 pt3 01:23:33.335 05:18:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:33.335 05:18:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 01:23:33.335 05:18:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 01:23:33.335 05:18:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 01:23:33.335 05:18:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:33.335 05:18:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:23:33.335 [2024-12-09 05:18:24.841804] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 01:23:33.335 [2024-12-09 05:18:24.844570] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 01:23:33.335 [2024-12-09 05:18:24.844665] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 01:23:33.335 [2024-12-09 05:18:24.844882] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 01:23:33.335 [2024-12-09 05:18:24.844905] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 01:23:33.335 [2024-12-09 05:18:24.845203] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
01:23:33.335 [2024-12-09 05:18:24.845445] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 01:23:33.335 [2024-12-09 05:18:24.845462] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 01:23:33.335 [2024-12-09 05:18:24.845732] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:23:33.335 05:18:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:33.335 05:18:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 01:23:33.335 05:18:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:23:33.335 05:18:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:23:33.335 05:18:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 01:23:33.335 05:18:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:23:33.335 05:18:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 01:23:33.335 05:18:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:23:33.336 05:18:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:23:33.336 05:18:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:23:33.336 05:18:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:23:33.336 05:18:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:23:33.336 05:18:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:23:33.336 05:18:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:33.336 05:18:24 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:23:33.336 05:18:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:33.336 05:18:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:23:33.336 "name": "raid_bdev1", 01:23:33.336 "uuid": "3610634b-0441-4d45-bb6a-3e646f3b0a5f", 01:23:33.336 "strip_size_kb": 64, 01:23:33.336 "state": "online", 01:23:33.336 "raid_level": "concat", 01:23:33.336 "superblock": true, 01:23:33.336 "num_base_bdevs": 3, 01:23:33.336 "num_base_bdevs_discovered": 3, 01:23:33.336 "num_base_bdevs_operational": 3, 01:23:33.336 "base_bdevs_list": [ 01:23:33.336 { 01:23:33.336 "name": "pt1", 01:23:33.336 "uuid": "00000000-0000-0000-0000-000000000001", 01:23:33.336 "is_configured": true, 01:23:33.336 "data_offset": 2048, 01:23:33.336 "data_size": 63488 01:23:33.336 }, 01:23:33.336 { 01:23:33.336 "name": "pt2", 01:23:33.336 "uuid": "00000000-0000-0000-0000-000000000002", 01:23:33.336 "is_configured": true, 01:23:33.336 "data_offset": 2048, 01:23:33.336 "data_size": 63488 01:23:33.336 }, 01:23:33.336 { 01:23:33.336 "name": "pt3", 01:23:33.336 "uuid": "00000000-0000-0000-0000-000000000003", 01:23:33.336 "is_configured": true, 01:23:33.336 "data_offset": 2048, 01:23:33.336 "data_size": 63488 01:23:33.336 } 01:23:33.336 ] 01:23:33.336 }' 01:23:33.336 05:18:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:23:33.336 05:18:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:23:33.903 05:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 01:23:33.903 05:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 01:23:33.903 05:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 01:23:33.903 05:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 01:23:33.903 05:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 01:23:33.903 05:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 01:23:33.903 05:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 01:23:33.903 05:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 01:23:33.903 05:18:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:33.903 05:18:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:23:33.903 [2024-12-09 05:18:25.334460] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 01:23:33.903 05:18:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:33.903 05:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 01:23:33.903 "name": "raid_bdev1", 01:23:33.903 "aliases": [ 01:23:33.903 "3610634b-0441-4d45-bb6a-3e646f3b0a5f" 01:23:33.903 ], 01:23:33.903 "product_name": "Raid Volume", 01:23:33.903 "block_size": 512, 01:23:33.903 "num_blocks": 190464, 01:23:33.903 "uuid": "3610634b-0441-4d45-bb6a-3e646f3b0a5f", 01:23:33.903 "assigned_rate_limits": { 01:23:33.903 "rw_ios_per_sec": 0, 01:23:33.903 "rw_mbytes_per_sec": 0, 01:23:33.903 "r_mbytes_per_sec": 0, 01:23:33.903 "w_mbytes_per_sec": 0 01:23:33.903 }, 01:23:33.903 "claimed": false, 01:23:33.903 "zoned": false, 01:23:33.903 "supported_io_types": { 01:23:33.903 "read": true, 01:23:33.903 "write": true, 01:23:33.903 "unmap": true, 01:23:33.903 "flush": true, 01:23:33.903 "reset": true, 01:23:33.903 "nvme_admin": false, 01:23:33.903 "nvme_io": false, 01:23:33.903 "nvme_io_md": false, 01:23:33.903 "write_zeroes": true, 01:23:33.903 "zcopy": false, 01:23:33.903 "get_zone_info": false, 01:23:33.904 "zone_management": false, 01:23:33.904 "zone_append": false, 01:23:33.904 "compare": 
false, 01:23:33.904 "compare_and_write": false, 01:23:33.904 "abort": false, 01:23:33.904 "seek_hole": false, 01:23:33.904 "seek_data": false, 01:23:33.904 "copy": false, 01:23:33.904 "nvme_iov_md": false 01:23:33.904 }, 01:23:33.904 "memory_domains": [ 01:23:33.904 { 01:23:33.904 "dma_device_id": "system", 01:23:33.904 "dma_device_type": 1 01:23:33.904 }, 01:23:33.904 { 01:23:33.904 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:23:33.904 "dma_device_type": 2 01:23:33.904 }, 01:23:33.904 { 01:23:33.904 "dma_device_id": "system", 01:23:33.904 "dma_device_type": 1 01:23:33.904 }, 01:23:33.904 { 01:23:33.904 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:23:33.904 "dma_device_type": 2 01:23:33.904 }, 01:23:33.904 { 01:23:33.904 "dma_device_id": "system", 01:23:33.904 "dma_device_type": 1 01:23:33.904 }, 01:23:33.904 { 01:23:33.904 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:23:33.904 "dma_device_type": 2 01:23:33.904 } 01:23:33.904 ], 01:23:33.904 "driver_specific": { 01:23:33.904 "raid": { 01:23:33.904 "uuid": "3610634b-0441-4d45-bb6a-3e646f3b0a5f", 01:23:33.904 "strip_size_kb": 64, 01:23:33.904 "state": "online", 01:23:33.904 "raid_level": "concat", 01:23:33.904 "superblock": true, 01:23:33.904 "num_base_bdevs": 3, 01:23:33.904 "num_base_bdevs_discovered": 3, 01:23:33.904 "num_base_bdevs_operational": 3, 01:23:33.904 "base_bdevs_list": [ 01:23:33.904 { 01:23:33.904 "name": "pt1", 01:23:33.904 "uuid": "00000000-0000-0000-0000-000000000001", 01:23:33.904 "is_configured": true, 01:23:33.904 "data_offset": 2048, 01:23:33.904 "data_size": 63488 01:23:33.904 }, 01:23:33.904 { 01:23:33.904 "name": "pt2", 01:23:33.904 "uuid": "00000000-0000-0000-0000-000000000002", 01:23:33.904 "is_configured": true, 01:23:33.904 "data_offset": 2048, 01:23:33.904 "data_size": 63488 01:23:33.904 }, 01:23:33.904 { 01:23:33.904 "name": "pt3", 01:23:33.904 "uuid": "00000000-0000-0000-0000-000000000003", 01:23:33.904 "is_configured": true, 01:23:33.904 "data_offset": 2048, 01:23:33.904 
"data_size": 63488 01:23:33.904 } 01:23:33.904 ] 01:23:33.904 } 01:23:33.904 } 01:23:33.904 }' 01:23:33.904 05:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 01:23:33.904 05:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 01:23:33.904 pt2 01:23:33.904 pt3' 01:23:33.904 05:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:23:33.904 05:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 01:23:33.904 05:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:23:33.904 05:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 01:23:33.904 05:18:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:33.904 05:18:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:23:33.904 05:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:23:33.904 05:18:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:34.162 05:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:23:34.162 05:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:23:34.162 05:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:23:34.162 05:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 01:23:34.162 05:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:23:34.162 05:18:25 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:34.162 05:18:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:23:34.162 05:18:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:34.162 05:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:23:34.162 05:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:23:34.163 05:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:23:34.163 05:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 01:23:34.163 05:18:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:34.163 05:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:23:34.163 05:18:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:23:34.163 05:18:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:34.163 05:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:23:34.163 05:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:23:34.163 05:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 01:23:34.163 05:18:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:34.163 05:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 01:23:34.163 05:18:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:23:34.163 [2024-12-09 05:18:25.658542] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 01:23:34.163 05:18:25 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:34.163 05:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=3610634b-0441-4d45-bb6a-3e646f3b0a5f 01:23:34.163 05:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 3610634b-0441-4d45-bb6a-3e646f3b0a5f ']' 01:23:34.163 05:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 01:23:34.163 05:18:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:34.163 05:18:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:23:34.163 [2024-12-09 05:18:25.706270] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 01:23:34.163 [2024-12-09 05:18:25.706620] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 01:23:34.163 [2024-12-09 05:18:25.706869] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 01:23:34.163 [2024-12-09 05:18:25.707113] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 01:23:34.163 [2024-12-09 05:18:25.707269] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 01:23:34.163 05:18:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:34.163 05:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 01:23:34.163 05:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 01:23:34.163 05:18:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:34.163 05:18:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:23:34.163 05:18:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:34.163 05:18:25 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 01:23:34.163 05:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 01:23:34.163 05:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 01:23:34.163 05:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 01:23:34.163 05:18:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:34.163 05:18:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:23:34.163 05:18:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:34.163 05:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 01:23:34.163 05:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 01:23:34.163 05:18:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:34.163 05:18:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:23:34.421 05:18:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:34.421 05:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 01:23:34.421 05:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 01:23:34.421 05:18:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:34.421 05:18:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:23:34.421 05:18:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:34.421 05:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 01:23:34.421 05:18:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
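The `@448`–`@449` frames above are the teardown half of the same pattern: after `bdev_raid_delete` takes the array offline, the test walks `base_bdevs_pt` and removes each passthru bdev in turn, then confirms via `bdev_get_bdevs` that no passthru bdevs remain. A self-contained sketch of that loop follows; as before, `rpc_cmd` is a stub standing in for the real JSON-RPC helper (an assumption, since no SPDK target is running here).

```shell
# Sketch of the passthru teardown loop traced above (bdev_raid.sh@448-449).
# ASSUMPTION: rpc_cmd is stubbed; it records each deleted bdev name instead
# of issuing a real bdev_passthru_delete JSON-RPC call.
deleted=()
rpc_cmd() { echo "rpc: $*"; deleted+=("$2"); }

base_bdevs_pt=(pt1 pt2 pt3)

for i in "${base_bdevs_pt[@]}"; do
    rpc_cmd bdev_passthru_delete "$i"
done

echo "deleted: ${deleted[*]}"
```

Deleting only the passthru layer is deliberate: the malloc bdevs underneath keep their on-disk raid superblock, which is what makes the subsequent negative test (`bdev_raid_create` over the still-claimed `malloc1 malloc2 malloc3` failing with `-17 File exists`) meaningful.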
01:23:34.421 05:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 01:23:34.421 05:18:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:23:34.421 05:18:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:34.421 05:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 01:23:34.421 05:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 01:23:34.421 05:18:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 01:23:34.421 05:18:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 01:23:34.421 05:18:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 01:23:34.421 05:18:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:23:34.421 05:18:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 01:23:34.421 05:18:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:23:34.421 05:18:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 01:23:34.421 05:18:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:34.421 05:18:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:23:34.421 [2024-12-09 05:18:25.850458] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 01:23:34.421 [2024-12-09 05:18:25.853719] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
malloc2 is claimed 01:23:34.421 [2024-12-09 05:18:25.853798] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 01:23:34.421 [2024-12-09 05:18:25.853896] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 01:23:34.421 [2024-12-09 05:18:25.854023] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 01:23:34.421 [2024-12-09 05:18:25.854058] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 01:23:34.421 [2024-12-09 05:18:25.854088] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 01:23:34.421 [2024-12-09 05:18:25.854103] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 01:23:34.421 request: 01:23:34.421 { 01:23:34.421 "name": "raid_bdev1", 01:23:34.421 "raid_level": "concat", 01:23:34.421 "base_bdevs": [ 01:23:34.421 "malloc1", 01:23:34.421 "malloc2", 01:23:34.421 "malloc3" 01:23:34.421 ], 01:23:34.421 "strip_size_kb": 64, 01:23:34.421 "superblock": false, 01:23:34.421 "method": "bdev_raid_create", 01:23:34.421 "req_id": 1 01:23:34.421 } 01:23:34.421 Got JSON-RPC error response 01:23:34.421 response: 01:23:34.421 { 01:23:34.421 "code": -17, 01:23:34.421 "message": "Failed to create RAID bdev raid_bdev1: File exists" 01:23:34.421 } 01:23:34.421 05:18:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 01:23:34.421 05:18:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 01:23:34.421 05:18:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:23:34.421 05:18:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:23:34.421 05:18:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es 
== 0 )) 01:23:34.421 05:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 01:23:34.421 05:18:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:34.421 05:18:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:23:34.421 05:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 01:23:34.421 05:18:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:34.421 05:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 01:23:34.421 05:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 01:23:34.421 05:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 01:23:34.421 05:18:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:34.421 05:18:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:23:34.421 [2024-12-09 05:18:25.914528] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 01:23:34.421 [2024-12-09 05:18:25.914940] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:23:34.421 [2024-12-09 05:18:25.915034] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 01:23:34.421 [2024-12-09 05:18:25.915161] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:23:34.421 [2024-12-09 05:18:25.918600] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:23:34.421 [2024-12-09 05:18:25.918755] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 01:23:34.421 [2024-12-09 05:18:25.919035] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 01:23:34.421 [2024-12-09 05:18:25.919235] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 01:23:34.421 pt1 01:23:34.421 05:18:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:34.421 05:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 01:23:34.421 05:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:23:34.421 05:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:23:34.421 05:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 01:23:34.421 05:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:23:34.421 05:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 01:23:34.421 05:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:23:34.421 05:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:23:34.421 05:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:23:34.421 05:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:23:34.421 05:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:23:34.421 05:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:23:34.421 05:18:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:34.421 05:18:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:23:34.421 05:18:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:34.421 05:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:23:34.421 "name": "raid_bdev1", 
01:23:34.421 "uuid": "3610634b-0441-4d45-bb6a-3e646f3b0a5f", 01:23:34.421 "strip_size_kb": 64, 01:23:34.421 "state": "configuring", 01:23:34.421 "raid_level": "concat", 01:23:34.421 "superblock": true, 01:23:34.421 "num_base_bdevs": 3, 01:23:34.421 "num_base_bdevs_discovered": 1, 01:23:34.421 "num_base_bdevs_operational": 3, 01:23:34.421 "base_bdevs_list": [ 01:23:34.421 { 01:23:34.421 "name": "pt1", 01:23:34.421 "uuid": "00000000-0000-0000-0000-000000000001", 01:23:34.421 "is_configured": true, 01:23:34.421 "data_offset": 2048, 01:23:34.421 "data_size": 63488 01:23:34.421 }, 01:23:34.421 { 01:23:34.421 "name": null, 01:23:34.421 "uuid": "00000000-0000-0000-0000-000000000002", 01:23:34.421 "is_configured": false, 01:23:34.421 "data_offset": 2048, 01:23:34.421 "data_size": 63488 01:23:34.421 }, 01:23:34.421 { 01:23:34.421 "name": null, 01:23:34.421 "uuid": "00000000-0000-0000-0000-000000000003", 01:23:34.421 "is_configured": false, 01:23:34.421 "data_offset": 2048, 01:23:34.421 "data_size": 63488 01:23:34.421 } 01:23:34.421 ] 01:23:34.421 }' 01:23:34.421 05:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:23:34.421 05:18:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:23:34.988 05:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 01:23:34.988 05:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 01:23:34.988 05:18:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:34.988 05:18:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:23:34.988 [2024-12-09 05:18:26.415342] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 01:23:34.988 [2024-12-09 05:18:26.415533] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:23:34.988 [2024-12-09 05:18:26.415581] 
vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 01:23:34.988 [2024-12-09 05:18:26.415598] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:23:34.988 [2024-12-09 05:18:26.416279] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:23:34.988 [2024-12-09 05:18:26.416312] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 01:23:34.988 [2024-12-09 05:18:26.416458] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 01:23:34.988 [2024-12-09 05:18:26.416504] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 01:23:34.988 pt2 01:23:34.988 05:18:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:34.988 05:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 01:23:34.988 05:18:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:34.988 05:18:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:23:34.988 [2024-12-09 05:18:26.423236] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 01:23:34.988 05:18:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:34.988 05:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 01:23:34.988 05:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:23:34.988 05:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:23:34.988 05:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 01:23:34.988 05:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:23:34.988 05:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=3 01:23:34.988 05:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:23:34.988 05:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:23:34.988 05:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:23:34.988 05:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:23:34.988 05:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:23:34.988 05:18:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:34.988 05:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:23:34.988 05:18:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:23:34.988 05:18:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:34.988 05:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:23:34.988 "name": "raid_bdev1", 01:23:34.988 "uuid": "3610634b-0441-4d45-bb6a-3e646f3b0a5f", 01:23:34.988 "strip_size_kb": 64, 01:23:34.988 "state": "configuring", 01:23:34.988 "raid_level": "concat", 01:23:34.988 "superblock": true, 01:23:34.988 "num_base_bdevs": 3, 01:23:34.988 "num_base_bdevs_discovered": 1, 01:23:34.988 "num_base_bdevs_operational": 3, 01:23:34.988 "base_bdevs_list": [ 01:23:34.988 { 01:23:34.988 "name": "pt1", 01:23:34.988 "uuid": "00000000-0000-0000-0000-000000000001", 01:23:34.988 "is_configured": true, 01:23:34.988 "data_offset": 2048, 01:23:34.988 "data_size": 63488 01:23:34.988 }, 01:23:34.988 { 01:23:34.988 "name": null, 01:23:34.988 "uuid": "00000000-0000-0000-0000-000000000002", 01:23:34.988 "is_configured": false, 01:23:34.988 "data_offset": 0, 01:23:34.988 "data_size": 63488 01:23:34.988 }, 01:23:34.988 { 01:23:34.988 "name": null, 01:23:34.988 
"uuid": "00000000-0000-0000-0000-000000000003", 01:23:34.988 "is_configured": false, 01:23:34.988 "data_offset": 2048, 01:23:34.988 "data_size": 63488 01:23:34.988 } 01:23:34.988 ] 01:23:34.988 }' 01:23:34.988 05:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:23:34.988 05:18:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:23:35.556 05:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 01:23:35.556 05:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 01:23:35.556 05:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 01:23:35.556 05:18:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:35.556 05:18:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:23:35.556 [2024-12-09 05:18:26.951436] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 01:23:35.556 [2024-12-09 05:18:26.951578] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:23:35.556 [2024-12-09 05:18:26.951614] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 01:23:35.556 [2024-12-09 05:18:26.951633] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:23:35.556 [2024-12-09 05:18:26.952345] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:23:35.556 [2024-12-09 05:18:26.952398] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 01:23:35.556 [2024-12-09 05:18:26.952518] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 01:23:35.556 [2024-12-09 05:18:26.952560] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 01:23:35.556 pt2 01:23:35.556 05:18:26 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:35.556 05:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 01:23:35.556 05:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 01:23:35.556 05:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 01:23:35.556 05:18:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:35.556 05:18:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:23:35.556 [2024-12-09 05:18:26.959335] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 01:23:35.556 [2024-12-09 05:18:26.959405] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:23:35.556 [2024-12-09 05:18:26.959430] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 01:23:35.556 [2024-12-09 05:18:26.959448] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:23:35.556 [2024-12-09 05:18:26.959931] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:23:35.556 [2024-12-09 05:18:26.959973] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 01:23:35.556 [2024-12-09 05:18:26.960051] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 01:23:35.556 [2024-12-09 05:18:26.960084] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 01:23:35.556 [2024-12-09 05:18:26.960240] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 01:23:35.556 [2024-12-09 05:18:26.960263] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 01:23:35.556 [2024-12-09 05:18:26.960612] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000005ee0 01:23:35.556 [2024-12-09 05:18:26.960817] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 01:23:35.556 [2024-12-09 05:18:26.960834] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 01:23:35.556 [2024-12-09 05:18:26.961007] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:23:35.556 pt3 01:23:35.556 05:18:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:35.556 05:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 01:23:35.556 05:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 01:23:35.556 05:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 01:23:35.556 05:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:23:35.556 05:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:23:35.556 05:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 01:23:35.556 05:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:23:35.556 05:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 01:23:35.556 05:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:23:35.556 05:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:23:35.556 05:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:23:35.556 05:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:23:35.556 05:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:23:35.556 05:18:26 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:23:35.556 05:18:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:35.556 05:18:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:23:35.556 05:18:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:35.556 05:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:23:35.556 "name": "raid_bdev1", 01:23:35.556 "uuid": "3610634b-0441-4d45-bb6a-3e646f3b0a5f", 01:23:35.556 "strip_size_kb": 64, 01:23:35.556 "state": "online", 01:23:35.556 "raid_level": "concat", 01:23:35.556 "superblock": true, 01:23:35.556 "num_base_bdevs": 3, 01:23:35.556 "num_base_bdevs_discovered": 3, 01:23:35.556 "num_base_bdevs_operational": 3, 01:23:35.556 "base_bdevs_list": [ 01:23:35.556 { 01:23:35.556 "name": "pt1", 01:23:35.556 "uuid": "00000000-0000-0000-0000-000000000001", 01:23:35.556 "is_configured": true, 01:23:35.556 "data_offset": 2048, 01:23:35.556 "data_size": 63488 01:23:35.556 }, 01:23:35.556 { 01:23:35.556 "name": "pt2", 01:23:35.556 "uuid": "00000000-0000-0000-0000-000000000002", 01:23:35.556 "is_configured": true, 01:23:35.556 "data_offset": 2048, 01:23:35.556 "data_size": 63488 01:23:35.556 }, 01:23:35.556 { 01:23:35.556 "name": "pt3", 01:23:35.556 "uuid": "00000000-0000-0000-0000-000000000003", 01:23:35.556 "is_configured": true, 01:23:35.556 "data_offset": 2048, 01:23:35.556 "data_size": 63488 01:23:35.556 } 01:23:35.556 ] 01:23:35.556 }' 01:23:35.556 05:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:23:35.556 05:18:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:23:36.122 05:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 01:23:36.122 05:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=raid_bdev1 01:23:36.122 05:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 01:23:36.122 05:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 01:23:36.122 05:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 01:23:36.122 05:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 01:23:36.122 05:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 01:23:36.122 05:18:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:36.122 05:18:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:23:36.123 05:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 01:23:36.123 [2024-12-09 05:18:27.516051] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 01:23:36.123 05:18:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:36.123 05:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 01:23:36.123 "name": "raid_bdev1", 01:23:36.123 "aliases": [ 01:23:36.123 "3610634b-0441-4d45-bb6a-3e646f3b0a5f" 01:23:36.123 ], 01:23:36.123 "product_name": "Raid Volume", 01:23:36.123 "block_size": 512, 01:23:36.123 "num_blocks": 190464, 01:23:36.123 "uuid": "3610634b-0441-4d45-bb6a-3e646f3b0a5f", 01:23:36.123 "assigned_rate_limits": { 01:23:36.123 "rw_ios_per_sec": 0, 01:23:36.123 "rw_mbytes_per_sec": 0, 01:23:36.123 "r_mbytes_per_sec": 0, 01:23:36.123 "w_mbytes_per_sec": 0 01:23:36.123 }, 01:23:36.123 "claimed": false, 01:23:36.123 "zoned": false, 01:23:36.123 "supported_io_types": { 01:23:36.123 "read": true, 01:23:36.123 "write": true, 01:23:36.123 "unmap": true, 01:23:36.123 "flush": true, 01:23:36.123 "reset": true, 01:23:36.123 "nvme_admin": false, 01:23:36.123 "nvme_io": false, 
01:23:36.123 "nvme_io_md": false, 01:23:36.123 "write_zeroes": true, 01:23:36.123 "zcopy": false, 01:23:36.123 "get_zone_info": false, 01:23:36.123 "zone_management": false, 01:23:36.123 "zone_append": false, 01:23:36.123 "compare": false, 01:23:36.123 "compare_and_write": false, 01:23:36.123 "abort": false, 01:23:36.123 "seek_hole": false, 01:23:36.123 "seek_data": false, 01:23:36.123 "copy": false, 01:23:36.123 "nvme_iov_md": false 01:23:36.123 }, 01:23:36.123 "memory_domains": [ 01:23:36.123 { 01:23:36.123 "dma_device_id": "system", 01:23:36.123 "dma_device_type": 1 01:23:36.123 }, 01:23:36.123 { 01:23:36.123 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:23:36.123 "dma_device_type": 2 01:23:36.123 }, 01:23:36.123 { 01:23:36.123 "dma_device_id": "system", 01:23:36.123 "dma_device_type": 1 01:23:36.123 }, 01:23:36.123 { 01:23:36.123 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:23:36.123 "dma_device_type": 2 01:23:36.123 }, 01:23:36.123 { 01:23:36.123 "dma_device_id": "system", 01:23:36.123 "dma_device_type": 1 01:23:36.123 }, 01:23:36.123 { 01:23:36.123 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:23:36.123 "dma_device_type": 2 01:23:36.123 } 01:23:36.123 ], 01:23:36.123 "driver_specific": { 01:23:36.123 "raid": { 01:23:36.123 "uuid": "3610634b-0441-4d45-bb6a-3e646f3b0a5f", 01:23:36.123 "strip_size_kb": 64, 01:23:36.123 "state": "online", 01:23:36.123 "raid_level": "concat", 01:23:36.123 "superblock": true, 01:23:36.123 "num_base_bdevs": 3, 01:23:36.123 "num_base_bdevs_discovered": 3, 01:23:36.123 "num_base_bdevs_operational": 3, 01:23:36.123 "base_bdevs_list": [ 01:23:36.123 { 01:23:36.123 "name": "pt1", 01:23:36.123 "uuid": "00000000-0000-0000-0000-000000000001", 01:23:36.123 "is_configured": true, 01:23:36.123 "data_offset": 2048, 01:23:36.123 "data_size": 63488 01:23:36.123 }, 01:23:36.123 { 01:23:36.123 "name": "pt2", 01:23:36.123 "uuid": "00000000-0000-0000-0000-000000000002", 01:23:36.123 "is_configured": true, 01:23:36.123 "data_offset": 2048, 01:23:36.123 
"data_size": 63488 01:23:36.123 }, 01:23:36.123 { 01:23:36.123 "name": "pt3", 01:23:36.123 "uuid": "00000000-0000-0000-0000-000000000003", 01:23:36.123 "is_configured": true, 01:23:36.123 "data_offset": 2048, 01:23:36.123 "data_size": 63488 01:23:36.123 } 01:23:36.123 ] 01:23:36.123 } 01:23:36.123 } 01:23:36.123 }' 01:23:36.123 05:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 01:23:36.123 05:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 01:23:36.123 pt2 01:23:36.123 pt3' 01:23:36.123 05:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:23:36.123 05:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 01:23:36.123 05:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:23:36.123 05:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 01:23:36.123 05:18:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:36.123 05:18:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:23:36.123 05:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:23:36.123 05:18:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:36.123 05:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:23:36.123 05:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:23:36.123 05:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:23:36.123 05:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 01:23:36.123 05:18:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:36.123 05:18:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:23:36.123 05:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:23:36.123 05:18:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:36.383 05:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:23:36.383 05:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:23:36.383 05:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:23:36.383 05:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 01:23:36.383 05:18:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:36.383 05:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:23:36.383 05:18:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:23:36.383 05:18:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:36.383 05:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:23:36.383 05:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:23:36.383 05:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 01:23:36.383 05:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 01:23:36.383 05:18:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:36.383 05:18:27 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 01:23:36.383 [2024-12-09 05:18:27.832064] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 01:23:36.383 05:18:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:36.383 05:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 3610634b-0441-4d45-bb6a-3e646f3b0a5f '!=' 3610634b-0441-4d45-bb6a-3e646f3b0a5f ']' 01:23:36.383 05:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 01:23:36.383 05:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 01:23:36.383 05:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 01:23:36.383 05:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 66802 01:23:36.383 05:18:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 66802 ']' 01:23:36.383 05:18:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 66802 01:23:36.383 05:18:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 01:23:36.383 05:18:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:23:36.383 05:18:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66802 01:23:36.383 killing process with pid 66802 01:23:36.383 05:18:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:23:36.383 05:18:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:23:36.383 05:18:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66802' 01:23:36.383 05:18:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 66802 01:23:36.383 05:18:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 66802 01:23:36.383 
[2024-12-09 05:18:27.907554] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 01:23:36.383 [2024-12-09 05:18:27.907747] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 01:23:36.383 [2024-12-09 05:18:27.907849] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 01:23:36.383 [2024-12-09 05:18:27.907889] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 01:23:36.641 [2024-12-09 05:18:28.225628] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 01:23:38.538 05:18:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 01:23:38.538 01:23:38.538 real 0m6.234s 01:23:38.538 user 0m9.056s 01:23:38.538 sys 0m0.925s 01:23:38.538 05:18:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 01:23:38.538 05:18:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:23:38.538 ************************************ 01:23:38.538 END TEST raid_superblock_test 01:23:38.538 ************************************ 01:23:38.538 05:18:29 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 01:23:38.538 05:18:29 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 01:23:38.538 05:18:29 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 01:23:38.538 05:18:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 01:23:38.538 ************************************ 01:23:38.538 START TEST raid_read_error_test 01:23:38.538 ************************************ 01:23:38.538 05:18:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 read 01:23:38.538 05:18:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 01:23:38.538 05:18:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local 
num_base_bdevs=3 01:23:38.538 05:18:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 01:23:38.538 05:18:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 01:23:38.538 05:18:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 01:23:38.538 05:18:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 01:23:38.538 05:18:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 01:23:38.538 05:18:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 01:23:38.538 05:18:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 01:23:38.538 05:18:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 01:23:38.538 05:18:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 01:23:38.538 05:18:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 01:23:38.538 05:18:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 01:23:38.538 05:18:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 01:23:38.538 05:18:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 01:23:38.538 05:18:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 01:23:38.538 05:18:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 01:23:38.538 05:18:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 01:23:38.538 05:18:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 01:23:38.538 05:18:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 01:23:38.538 05:18:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 01:23:38.539 05:18:29 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 01:23:38.539 05:18:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 01:23:38.539 05:18:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 01:23:38.539 05:18:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 01:23:38.539 05:18:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.qwyzYyHxDk 01:23:38.539 05:18:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67068 01:23:38.539 05:18:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67068 01:23:38.539 05:18:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 01:23:38.539 05:18:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 67068 ']' 01:23:38.539 05:18:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:23:38.539 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:23:38.539 05:18:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 01:23:38.539 05:18:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:23:38.539 05:18:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 01:23:38.539 05:18:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 01:23:38.539 [2024-12-09 05:18:29.912200] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
01:23:38.539 [2024-12-09 05:18:29.914202] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67068 ] 01:23:38.539 [2024-12-09 05:18:30.104080] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:23:38.796 [2024-12-09 05:18:30.251047] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:23:39.054 [2024-12-09 05:18:30.479071] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 01:23:39.054 [2024-12-09 05:18:30.479168] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 01:23:39.315 05:18:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:23:39.315 05:18:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 01:23:39.315 05:18:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 01:23:39.315 05:18:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 01:23:39.315 05:18:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:39.315 05:18:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 01:23:39.315 BaseBdev1_malloc 01:23:39.315 05:18:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:39.315 05:18:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 01:23:39.315 05:18:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:39.315 05:18:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 01:23:39.315 true 01:23:39.315 05:18:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
01:23:39.315 05:18:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 01:23:39.315 05:18:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:39.315 05:18:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 01:23:39.315 [2024-12-09 05:18:30.893126] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 01:23:39.315 [2024-12-09 05:18:30.893223] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:23:39.315 [2024-12-09 05:18:30.893261] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 01:23:39.315 [2024-12-09 05:18:30.893280] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:23:39.315 [2024-12-09 05:18:30.896464] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:23:39.315 [2024-12-09 05:18:30.896694] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 01:23:39.315 BaseBdev1 01:23:39.315 05:18:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:39.315 05:18:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 01:23:39.315 05:18:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 01:23:39.315 05:18:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:39.315 05:18:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 01:23:39.585 BaseBdev2_malloc 01:23:39.585 05:18:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:39.585 05:18:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 01:23:39.585 05:18:30 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 01:23:39.585 05:18:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 01:23:39.585 true 01:23:39.585 05:18:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:39.585 05:18:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 01:23:39.585 05:18:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:39.585 05:18:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 01:23:39.585 [2024-12-09 05:18:30.962603] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 01:23:39.585 [2024-12-09 05:18:30.962692] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:23:39.585 [2024-12-09 05:18:30.962724] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 01:23:39.585 [2024-12-09 05:18:30.962741] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:23:39.585 [2024-12-09 05:18:30.965794] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:23:39.585 [2024-12-09 05:18:30.965851] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 01:23:39.585 BaseBdev2 01:23:39.585 05:18:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:39.585 05:18:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 01:23:39.585 05:18:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 01:23:39.585 05:18:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:39.585 05:18:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 01:23:39.585 BaseBdev3_malloc 01:23:39.585 05:18:31 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:39.585 05:18:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 01:23:39.585 05:18:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:39.585 05:18:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 01:23:39.585 true 01:23:39.585 05:18:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:39.585 05:18:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 01:23:39.585 05:18:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:39.585 05:18:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 01:23:39.585 [2024-12-09 05:18:31.046244] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 01:23:39.585 [2024-12-09 05:18:31.046336] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:23:39.585 [2024-12-09 05:18:31.046384] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 01:23:39.585 [2024-12-09 05:18:31.046407] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:23:39.585 [2024-12-09 05:18:31.049457] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:23:39.585 [2024-12-09 05:18:31.049521] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 01:23:39.585 BaseBdev3 01:23:39.585 05:18:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:39.585 05:18:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 01:23:39.585 05:18:31 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 01:23:39.585 05:18:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 01:23:39.585 [2024-12-09 05:18:31.058528] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 01:23:39.586 [2024-12-09 05:18:31.061069] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 01:23:39.586 [2024-12-09 05:18:31.061331] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 01:23:39.586 [2024-12-09 05:18:31.061677] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 01:23:39.586 [2024-12-09 05:18:31.061699] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 01:23:39.586 [2024-12-09 05:18:31.062076] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 01:23:39.586 [2024-12-09 05:18:31.062312] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 01:23:39.586 [2024-12-09 05:18:31.062336] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 01:23:39.586 [2024-12-09 05:18:31.062619] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:23:39.586 05:18:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:39.586 05:18:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 01:23:39.586 05:18:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:23:39.586 05:18:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:23:39.586 05:18:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 01:23:39.586 05:18:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:23:39.586 05:18:31 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 01:23:39.586 05:18:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:23:39.586 05:18:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:23:39.586 05:18:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:23:39.586 05:18:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:23:39.586 05:18:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:23:39.586 05:18:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:39.586 05:18:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 01:23:39.586 05:18:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:23:39.586 05:18:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:39.586 05:18:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:23:39.586 "name": "raid_bdev1", 01:23:39.586 "uuid": "1bb27398-3eec-4369-bd0c-04de35b5506f", 01:23:39.586 "strip_size_kb": 64, 01:23:39.586 "state": "online", 01:23:39.586 "raid_level": "concat", 01:23:39.586 "superblock": true, 01:23:39.586 "num_base_bdevs": 3, 01:23:39.586 "num_base_bdevs_discovered": 3, 01:23:39.586 "num_base_bdevs_operational": 3, 01:23:39.586 "base_bdevs_list": [ 01:23:39.586 { 01:23:39.586 "name": "BaseBdev1", 01:23:39.586 "uuid": "c5bdaa62-9051-5ba4-b450-3db0dfbacfd2", 01:23:39.586 "is_configured": true, 01:23:39.586 "data_offset": 2048, 01:23:39.586 "data_size": 63488 01:23:39.586 }, 01:23:39.586 { 01:23:39.586 "name": "BaseBdev2", 01:23:39.586 "uuid": "9d23bbdb-65dd-55cc-9d7d-1596fa27f902", 01:23:39.586 "is_configured": true, 01:23:39.586 "data_offset": 2048, 01:23:39.586 "data_size": 63488 
01:23:39.586 }, 01:23:39.586 { 01:23:39.586 "name": "BaseBdev3", 01:23:39.586 "uuid": "21abb9ea-068a-5a24-90a2-0caed950a421", 01:23:39.586 "is_configured": true, 01:23:39.586 "data_offset": 2048, 01:23:39.586 "data_size": 63488 01:23:39.586 } 01:23:39.586 ] 01:23:39.586 }' 01:23:39.586 05:18:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:23:39.586 05:18:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 01:23:40.150 05:18:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 01:23:40.151 05:18:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 01:23:40.151 [2024-12-09 05:18:31.684220] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 01:23:41.083 05:18:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 01:23:41.083 05:18:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:41.083 05:18:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 01:23:41.083 05:18:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:41.083 05:18:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 01:23:41.083 05:18:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 01:23:41.083 05:18:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 01:23:41.083 05:18:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 01:23:41.083 05:18:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:23:41.083 05:18:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
01:23:41.083 05:18:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 01:23:41.083 05:18:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:23:41.083 05:18:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 01:23:41.083 05:18:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:23:41.083 05:18:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:23:41.083 05:18:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:23:41.083 05:18:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:23:41.083 05:18:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:23:41.083 05:18:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:41.083 05:18:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:23:41.083 05:18:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 01:23:41.083 05:18:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:41.083 05:18:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:23:41.083 "name": "raid_bdev1", 01:23:41.083 "uuid": "1bb27398-3eec-4369-bd0c-04de35b5506f", 01:23:41.083 "strip_size_kb": 64, 01:23:41.083 "state": "online", 01:23:41.083 "raid_level": "concat", 01:23:41.083 "superblock": true, 01:23:41.083 "num_base_bdevs": 3, 01:23:41.083 "num_base_bdevs_discovered": 3, 01:23:41.083 "num_base_bdevs_operational": 3, 01:23:41.083 "base_bdevs_list": [ 01:23:41.083 { 01:23:41.083 "name": "BaseBdev1", 01:23:41.083 "uuid": "c5bdaa62-9051-5ba4-b450-3db0dfbacfd2", 01:23:41.083 "is_configured": true, 01:23:41.083 "data_offset": 2048, 01:23:41.083 "data_size": 63488 
01:23:41.083 }, 01:23:41.083 { 01:23:41.083 "name": "BaseBdev2", 01:23:41.083 "uuid": "9d23bbdb-65dd-55cc-9d7d-1596fa27f902", 01:23:41.083 "is_configured": true, 01:23:41.083 "data_offset": 2048, 01:23:41.083 "data_size": 63488 01:23:41.083 }, 01:23:41.083 { 01:23:41.083 "name": "BaseBdev3", 01:23:41.083 "uuid": "21abb9ea-068a-5a24-90a2-0caed950a421", 01:23:41.083 "is_configured": true, 01:23:41.083 "data_offset": 2048, 01:23:41.083 "data_size": 63488 01:23:41.083 } 01:23:41.083 ] 01:23:41.083 }' 01:23:41.083 05:18:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:23:41.083 05:18:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 01:23:41.648 05:18:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 01:23:41.648 05:18:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:41.648 05:18:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 01:23:41.648 [2024-12-09 05:18:33.065706] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 01:23:41.648 [2024-12-09 05:18:33.065751] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 01:23:41.648 [2024-12-09 05:18:33.069265] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 01:23:41.648 [2024-12-09 05:18:33.069331] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:23:41.648 [2024-12-09 05:18:33.069414] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 01:23:41.648 [2024-12-09 05:18:33.069432] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 01:23:41.648 { 01:23:41.648 "results": [ 01:23:41.648 { 01:23:41.648 "job": "raid_bdev1", 01:23:41.648 "core_mask": "0x1", 01:23:41.648 "workload": "randrw", 01:23:41.648 "percentage": 50, 
01:23:41.648 "status": "finished", 01:23:41.648 "queue_depth": 1, 01:23:41.648 "io_size": 131072, 01:23:41.648 "runtime": 1.378675, 01:23:41.648 "iops": 9516.383484142383, 01:23:41.648 "mibps": 1189.547935517798, 01:23:41.648 "io_failed": 1, 01:23:41.648 "io_timeout": 0, 01:23:41.648 "avg_latency_us": 147.24316300725417, 01:23:41.648 "min_latency_us": 44.916363636363634, 01:23:41.648 "max_latency_us": 1832.0290909090909 01:23:41.648 } 01:23:41.648 ], 01:23:41.648 "core_count": 1 01:23:41.648 } 01:23:41.648 05:18:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:41.648 05:18:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67068 01:23:41.648 05:18:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 67068 ']' 01:23:41.648 05:18:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 67068 01:23:41.648 05:18:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 01:23:41.648 05:18:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:23:41.648 05:18:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67068 01:23:41.648 killing process with pid 67068 01:23:41.648 05:18:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:23:41.648 05:18:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:23:41.648 05:18:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67068' 01:23:41.648 05:18:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 67068 01:23:41.648 05:18:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 67068 01:23:41.648 [2024-12-09 05:18:33.104252] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 01:23:41.907 [2024-12-09 
05:18:33.326588] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 01:23:43.278 05:18:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.qwyzYyHxDk 01:23:43.278 05:18:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 01:23:43.278 05:18:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 01:23:43.278 ************************************ 01:23:43.278 END TEST raid_read_error_test 01:23:43.278 ************************************ 01:23:43.278 05:18:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 01:23:43.278 05:18:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 01:23:43.278 05:18:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 01:23:43.278 05:18:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 01:23:43.278 05:18:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 01:23:43.278 01:23:43.278 real 0m4.833s 01:23:43.278 user 0m5.819s 01:23:43.278 sys 0m0.641s 01:23:43.278 05:18:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 01:23:43.278 05:18:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 01:23:43.278 05:18:34 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 01:23:43.278 05:18:34 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 01:23:43.278 05:18:34 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 01:23:43.278 05:18:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 01:23:43.278 ************************************ 01:23:43.278 START TEST raid_write_error_test 01:23:43.278 ************************************ 01:23:43.279 05:18:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 write 01:23:43.279 05:18:34 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 01:23:43.279 05:18:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 01:23:43.279 05:18:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 01:23:43.279 05:18:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 01:23:43.279 05:18:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 01:23:43.279 05:18:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 01:23:43.279 05:18:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 01:23:43.279 05:18:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 01:23:43.279 05:18:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 01:23:43.279 05:18:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 01:23:43.279 05:18:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 01:23:43.279 05:18:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 01:23:43.279 05:18:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 01:23:43.279 05:18:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 01:23:43.279 05:18:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 01:23:43.279 05:18:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 01:23:43.279 05:18:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 01:23:43.279 05:18:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 01:23:43.279 05:18:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 01:23:43.279 05:18:34 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 01:23:43.279 05:18:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 01:23:43.279 05:18:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 01:23:43.279 05:18:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 01:23:43.279 05:18:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 01:23:43.279 05:18:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 01:23:43.279 05:18:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.ykGIvmP6Be 01:23:43.279 05:18:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67213 01:23:43.279 05:18:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67213 01:23:43.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:23:43.279 05:18:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 01:23:43.279 05:18:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 67213 ']' 01:23:43.279 05:18:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:23:43.279 05:18:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 01:23:43.279 05:18:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
01:23:43.279 05:18:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 01:23:43.279 05:18:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 01:23:43.279 [2024-12-09 05:18:34.770245] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:23:43.279 [2024-12-09 05:18:34.770439] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67213 ] 01:23:43.537 [2024-12-09 05:18:34.958507] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:23:43.537 [2024-12-09 05:18:35.097546] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:23:43.794 [2024-12-09 05:18:35.336874] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 01:23:43.794 [2024-12-09 05:18:35.337281] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 01:23:44.415 05:18:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:23:44.415 05:18:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 01:23:44.415 05:18:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 01:23:44.415 05:18:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 01:23:44.415 05:18:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:44.415 05:18:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 01:23:44.415 BaseBdev1_malloc 01:23:44.415 05:18:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:44.415 05:18:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 01:23:44.415 05:18:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:44.415 05:18:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 01:23:44.415 true 01:23:44.415 05:18:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:44.415 05:18:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 01:23:44.415 05:18:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:44.415 05:18:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 01:23:44.415 [2024-12-09 05:18:35.852335] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 01:23:44.415 [2024-12-09 05:18:35.852539] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:23:44.415 [2024-12-09 05:18:35.852579] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 01:23:44.415 [2024-12-09 05:18:35.852599] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:23:44.415 [2024-12-09 05:18:35.855675] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:23:44.415 [2024-12-09 05:18:35.855892] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 01:23:44.415 BaseBdev1 01:23:44.415 05:18:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:44.415 05:18:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 01:23:44.415 05:18:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 01:23:44.415 05:18:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:44.415 05:18:35 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 01:23:44.415 BaseBdev2_malloc 01:23:44.415 05:18:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:44.415 05:18:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 01:23:44.415 05:18:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:44.415 05:18:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 01:23:44.415 true 01:23:44.415 05:18:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:44.415 05:18:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 01:23:44.416 05:18:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:44.416 05:18:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 01:23:44.416 [2024-12-09 05:18:35.911742] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 01:23:44.416 [2024-12-09 05:18:35.911827] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:23:44.416 [2024-12-09 05:18:35.911863] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 01:23:44.416 [2024-12-09 05:18:35.911883] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:23:44.416 [2024-12-09 05:18:35.914995] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:23:44.416 [2024-12-09 05:18:35.915245] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 01:23:44.416 BaseBdev2 01:23:44.416 05:18:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:44.416 05:18:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 01:23:44.416 05:18:35 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 01:23:44.416 05:18:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:44.416 05:18:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 01:23:44.416 BaseBdev3_malloc 01:23:44.416 05:18:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:44.416 05:18:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 01:23:44.416 05:18:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:44.416 05:18:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 01:23:44.416 true 01:23:44.416 05:18:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:44.416 05:18:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 01:23:44.416 05:18:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:44.416 05:18:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 01:23:44.416 [2024-12-09 05:18:35.977979] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 01:23:44.416 [2024-12-09 05:18:35.978215] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:23:44.416 [2024-12-09 05:18:35.978262] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 01:23:44.416 [2024-12-09 05:18:35.978283] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:23:44.416 [2024-12-09 05:18:35.981528] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:23:44.416 [2024-12-09 05:18:35.981588] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 01:23:44.416 BaseBdev3 01:23:44.416 05:18:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:44.416 05:18:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 01:23:44.416 05:18:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:44.416 05:18:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 01:23:44.416 [2024-12-09 05:18:35.986140] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 01:23:44.416 [2024-12-09 05:18:35.988858] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 01:23:44.416 [2024-12-09 05:18:35.989129] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 01:23:44.416 [2024-12-09 05:18:35.989482] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 01:23:44.416 [2024-12-09 05:18:35.989518] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 01:23:44.416 [2024-12-09 05:18:35.989915] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 01:23:44.416 [2024-12-09 05:18:35.990179] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 01:23:44.416 [2024-12-09 05:18:35.990204] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 01:23:44.416 [2024-12-09 05:18:35.990549] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:23:44.416 05:18:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:44.416 05:18:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 01:23:44.416 05:18:35 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:23:44.416 05:18:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:23:44.416 05:18:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 01:23:44.416 05:18:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:23:44.416 05:18:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 01:23:44.416 05:18:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:23:44.416 05:18:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:23:44.416 05:18:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:23:44.416 05:18:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:23:44.416 05:18:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:23:44.416 05:18:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:44.416 05:18:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 01:23:44.416 05:18:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:23:44.416 05:18:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:44.673 05:18:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:23:44.673 "name": "raid_bdev1", 01:23:44.673 "uuid": "97aad583-6f32-45de-88e5-196be930aea2", 01:23:44.673 "strip_size_kb": 64, 01:23:44.673 "state": "online", 01:23:44.673 "raid_level": "concat", 01:23:44.673 "superblock": true, 01:23:44.673 "num_base_bdevs": 3, 01:23:44.673 "num_base_bdevs_discovered": 3, 01:23:44.673 "num_base_bdevs_operational": 3, 01:23:44.673 "base_bdevs_list": [ 01:23:44.673 { 01:23:44.673 
"name": "BaseBdev1", 01:23:44.674 "uuid": "0a4afa79-9045-50e6-92ea-0cc299b33743", 01:23:44.674 "is_configured": true, 01:23:44.674 "data_offset": 2048, 01:23:44.674 "data_size": 63488 01:23:44.674 }, 01:23:44.674 { 01:23:44.674 "name": "BaseBdev2", 01:23:44.674 "uuid": "c85b455d-ecf9-5b5b-ac0d-e516d9cd807e", 01:23:44.674 "is_configured": true, 01:23:44.674 "data_offset": 2048, 01:23:44.674 "data_size": 63488 01:23:44.674 }, 01:23:44.674 { 01:23:44.674 "name": "BaseBdev3", 01:23:44.674 "uuid": "807dae54-0430-5da6-9e07-072161095c2a", 01:23:44.674 "is_configured": true, 01:23:44.674 "data_offset": 2048, 01:23:44.674 "data_size": 63488 01:23:44.674 } 01:23:44.674 ] 01:23:44.674 }' 01:23:44.674 05:18:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:23:44.674 05:18:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 01:23:44.932 05:18:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 01:23:44.932 05:18:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 01:23:45.189 [2024-12-09 05:18:36.640751] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 01:23:46.121 05:18:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 01:23:46.121 05:18:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:46.121 05:18:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 01:23:46.121 05:18:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:46.121 05:18:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 01:23:46.121 05:18:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 01:23:46.121 05:18:37 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 01:23:46.121 05:18:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 01:23:46.121 05:18:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:23:46.121 05:18:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:23:46.121 05:18:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 01:23:46.121 05:18:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:23:46.121 05:18:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 01:23:46.121 05:18:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:23:46.121 05:18:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:23:46.121 05:18:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:23:46.121 05:18:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:23:46.121 05:18:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:23:46.121 05:18:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:23:46.121 05:18:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:46.121 05:18:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 01:23:46.121 05:18:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:46.122 05:18:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:23:46.122 "name": "raid_bdev1", 01:23:46.122 "uuid": "97aad583-6f32-45de-88e5-196be930aea2", 01:23:46.122 "strip_size_kb": 64, 01:23:46.122 "state": "online", 
01:23:46.122 "raid_level": "concat", 01:23:46.122 "superblock": true, 01:23:46.122 "num_base_bdevs": 3, 01:23:46.122 "num_base_bdevs_discovered": 3, 01:23:46.122 "num_base_bdevs_operational": 3, 01:23:46.122 "base_bdevs_list": [ 01:23:46.122 { 01:23:46.122 "name": "BaseBdev1", 01:23:46.122 "uuid": "0a4afa79-9045-50e6-92ea-0cc299b33743", 01:23:46.122 "is_configured": true, 01:23:46.122 "data_offset": 2048, 01:23:46.122 "data_size": 63488 01:23:46.122 }, 01:23:46.122 { 01:23:46.122 "name": "BaseBdev2", 01:23:46.122 "uuid": "c85b455d-ecf9-5b5b-ac0d-e516d9cd807e", 01:23:46.122 "is_configured": true, 01:23:46.122 "data_offset": 2048, 01:23:46.122 "data_size": 63488 01:23:46.122 }, 01:23:46.122 { 01:23:46.122 "name": "BaseBdev3", 01:23:46.122 "uuid": "807dae54-0430-5da6-9e07-072161095c2a", 01:23:46.122 "is_configured": true, 01:23:46.122 "data_offset": 2048, 01:23:46.122 "data_size": 63488 01:23:46.122 } 01:23:46.122 ] 01:23:46.122 }' 01:23:46.122 05:18:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:23:46.122 05:18:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 01:23:46.688 05:18:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 01:23:46.688 05:18:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:46.688 05:18:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 01:23:46.688 [2024-12-09 05:18:38.108698] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 01:23:46.688 [2024-12-09 05:18:38.108755] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 01:23:46.688 [2024-12-09 05:18:38.112550] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 01:23:46.688 [2024-12-09 05:18:38.112811] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:23:46.688 [2024-12-09 05:18:38.113003] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 01:23:46.688 [2024-12-09 05:18:38.113169] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 01:23:46.688 { 01:23:46.688 "results": [ 01:23:46.688 { 01:23:46.688 "job": "raid_bdev1", 01:23:46.688 "core_mask": "0x1", 01:23:46.688 "workload": "randrw", 01:23:46.688 "percentage": 50, 01:23:46.688 "status": "finished", 01:23:46.688 "queue_depth": 1, 01:23:46.688 "io_size": 131072, 01:23:46.688 "runtime": 1.464777, 01:23:46.688 "iops": 8928.321512421344, 01:23:46.688 "mibps": 1116.040189052668, 01:23:46.688 "io_failed": 1, 01:23:46.688 "io_timeout": 0, 01:23:46.688 "avg_latency_us": 157.21847402845646, 01:23:46.688 "min_latency_us": 44.916363636363634, 01:23:46.688 "max_latency_us": 1876.7127272727273 01:23:46.688 } 01:23:46.688 ], 01:23:46.688 "core_count": 1 01:23:46.688 } 01:23:46.688 05:18:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:46.688 05:18:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67213 01:23:46.688 05:18:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 67213 ']' 01:23:46.688 05:18:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 67213 01:23:46.688 05:18:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 01:23:46.688 05:18:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:23:46.688 05:18:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67213 01:23:46.688 05:18:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:23:46.688 killing process with pid 67213 05:18:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:23:46.688 05:18:38 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67213' 01:23:46.688 05:18:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 67213 01:23:46.688 [2024-12-09 05:18:38.165829] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 01:23:46.688 05:18:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 67213 01:23:46.946 [2024-12-09 05:18:38.388377] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 01:23:48.322 05:18:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.ykGIvmP6Be 01:23:48.322 05:18:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 01:23:48.322 05:18:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 01:23:48.322 05:18:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.68 01:23:48.322 05:18:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 01:23:48.322 05:18:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 01:23:48.322 05:18:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 01:23:48.322 05:18:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.68 != \0\.\0\0 ]] 01:23:48.322 01:23:48.322 real 0m4.968s 01:23:48.322 user 0m6.149s 01:23:48.322 sys 0m0.666s 01:23:48.322 ************************************ 01:23:48.322 END TEST raid_write_error_test 01:23:48.322 ************************************ 01:23:48.322 05:18:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 01:23:48.322 05:18:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 01:23:48.322 05:18:39 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 01:23:48.322 05:18:39 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test raid1 3 false 01:23:48.322 05:18:39 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 01:23:48.322 05:18:39 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 01:23:48.322 05:18:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 01:23:48.322 ************************************ 01:23:48.322 START TEST raid_state_function_test 01:23:48.322 ************************************ 01:23:48.322 05:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 false 01:23:48.322 05:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 01:23:48.322 05:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 01:23:48.322 05:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 01:23:48.322 05:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 01:23:48.322 05:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 01:23:48.322 05:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 01:23:48.322 05:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 01:23:48.322 05:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 01:23:48.322 05:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 01:23:48.322 05:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 01:23:48.322 05:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 01:23:48.322 05:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 01:23:48.322 05:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 01:23:48.322 05:18:39 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@209 -- # (( i++ )) 01:23:48.322 05:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 01:23:48.322 05:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 01:23:48.322 05:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 01:23:48.322 05:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 01:23:48.322 05:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 01:23:48.322 05:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 01:23:48.322 05:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 01:23:48.322 05:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 01:23:48.322 05:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 01:23:48.322 05:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 01:23:48.322 05:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 01:23:48.322 Process raid pid: 67357 01:23:48.322 05:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=67357 01:23:48.322 05:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 67357' 01:23:48.322 05:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 67357 01:23:48.322 05:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 01:23:48.322 05:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 67357 ']' 01:23:48.322 05:18:39 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:23:48.322 05:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 01:23:48.322 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:23:48.322 05:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:23:48.322 05:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 01:23:48.322 05:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:23:48.322 [2024-12-09 05:18:39.778494] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:23:48.322 [2024-12-09 05:18:39.778929] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:23:48.582 [2024-12-09 05:18:39.972153] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:23:48.582 [2024-12-09 05:18:40.142403] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:23:48.841 [2024-12-09 05:18:40.372807] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 01:23:48.841 [2024-12-09 05:18:40.372875] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 01:23:49.100 05:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:23:49.100 05:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 01:23:49.100 05:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 01:23:49.100 05:18:40 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 01:23:49.100 05:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:23:49.100 [2024-12-09 05:18:40.705091] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 01:23:49.100 [2024-12-09 05:18:40.705402] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 01:23:49.100 [2024-12-09 05:18:40.705448] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 01:23:49.100 [2024-12-09 05:18:40.705477] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 01:23:49.100 [2024-12-09 05:18:40.705491] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 01:23:49.100 [2024-12-09 05:18:40.705535] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 01:23:49.100 05:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:49.100 05:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 01:23:49.100 05:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:23:49.100 05:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:23:49.100 05:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:23:49.100 05:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:23:49.100 05:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 01:23:49.100 05:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:23:49.100 05:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:23:49.100 
05:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:23:49.100 05:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:23:49.100 05:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:23:49.100 05:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:49.100 05:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:23:49.100 05:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:23:49.362 05:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:49.362 05:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:23:49.362 "name": "Existed_Raid", 01:23:49.362 "uuid": "00000000-0000-0000-0000-000000000000", 01:23:49.362 "strip_size_kb": 0, 01:23:49.362 "state": "configuring", 01:23:49.362 "raid_level": "raid1", 01:23:49.362 "superblock": false, 01:23:49.362 "num_base_bdevs": 3, 01:23:49.362 "num_base_bdevs_discovered": 0, 01:23:49.362 "num_base_bdevs_operational": 3, 01:23:49.362 "base_bdevs_list": [ 01:23:49.362 { 01:23:49.362 "name": "BaseBdev1", 01:23:49.362 "uuid": "00000000-0000-0000-0000-000000000000", 01:23:49.362 "is_configured": false, 01:23:49.362 "data_offset": 0, 01:23:49.362 "data_size": 0 01:23:49.362 }, 01:23:49.362 { 01:23:49.362 "name": "BaseBdev2", 01:23:49.362 "uuid": "00000000-0000-0000-0000-000000000000", 01:23:49.362 "is_configured": false, 01:23:49.362 "data_offset": 0, 01:23:49.362 "data_size": 0 01:23:49.362 }, 01:23:49.362 { 01:23:49.362 "name": "BaseBdev3", 01:23:49.362 "uuid": "00000000-0000-0000-0000-000000000000", 01:23:49.362 "is_configured": false, 01:23:49.362 "data_offset": 0, 01:23:49.362 "data_size": 0 01:23:49.362 } 01:23:49.362 ] 01:23:49.362 }' 01:23:49.362 05:18:40 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:23:49.362 05:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:23:49.933 05:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 01:23:49.933 05:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:49.933 05:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:23:49.933 [2024-12-09 05:18:41.265225] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 01:23:49.933 [2024-12-09 05:18:41.265489] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 01:23:49.933 05:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:49.933 05:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 01:23:49.933 05:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:49.933 05:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:23:49.933 [2024-12-09 05:18:41.273166] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 01:23:49.933 [2024-12-09 05:18:41.273385] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 01:23:49.934 [2024-12-09 05:18:41.273416] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 01:23:49.934 [2024-12-09 05:18:41.273439] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 01:23:49.934 [2024-12-09 05:18:41.273452] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 01:23:49.934 [2024-12-09 05:18:41.273471] 
bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 01:23:49.934 05:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:49.934 05:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 01:23:49.934 05:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:49.934 05:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:23:49.934 [2024-12-09 05:18:41.320671] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 01:23:49.934 BaseBdev1 01:23:49.934 05:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:49.934 05:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 01:23:49.934 05:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 01:23:49.934 05:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 01:23:49.934 05:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 01:23:49.934 05:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 01:23:49.934 05:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 01:23:49.934 05:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 01:23:49.934 05:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:49.934 05:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:23:49.934 05:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:49.934 05:18:41 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 01:23:49.934 05:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:49.934 05:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:23:49.934 [ 01:23:49.934 { 01:23:49.934 "name": "BaseBdev1", 01:23:49.934 "aliases": [ 01:23:49.934 "aaea6b17-b56d-48c0-adad-61973922d22f" 01:23:49.934 ], 01:23:49.934 "product_name": "Malloc disk", 01:23:49.934 "block_size": 512, 01:23:49.934 "num_blocks": 65536, 01:23:49.934 "uuid": "aaea6b17-b56d-48c0-adad-61973922d22f", 01:23:49.934 "assigned_rate_limits": { 01:23:49.934 "rw_ios_per_sec": 0, 01:23:49.934 "rw_mbytes_per_sec": 0, 01:23:49.934 "r_mbytes_per_sec": 0, 01:23:49.934 "w_mbytes_per_sec": 0 01:23:49.934 }, 01:23:49.934 "claimed": true, 01:23:49.934 "claim_type": "exclusive_write", 01:23:49.934 "zoned": false, 01:23:49.934 "supported_io_types": { 01:23:49.934 "read": true, 01:23:49.934 "write": true, 01:23:49.934 "unmap": true, 01:23:49.934 "flush": true, 01:23:49.934 "reset": true, 01:23:49.934 "nvme_admin": false, 01:23:49.934 "nvme_io": false, 01:23:49.934 "nvme_io_md": false, 01:23:49.934 "write_zeroes": true, 01:23:49.934 "zcopy": true, 01:23:49.934 "get_zone_info": false, 01:23:49.934 "zone_management": false, 01:23:49.934 "zone_append": false, 01:23:49.934 "compare": false, 01:23:49.934 "compare_and_write": false, 01:23:49.934 "abort": true, 01:23:49.934 "seek_hole": false, 01:23:49.934 "seek_data": false, 01:23:49.934 "copy": true, 01:23:49.934 "nvme_iov_md": false 01:23:49.934 }, 01:23:49.934 "memory_domains": [ 01:23:49.934 { 01:23:49.934 "dma_device_id": "system", 01:23:49.934 "dma_device_type": 1 01:23:49.934 }, 01:23:49.934 { 01:23:49.934 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:23:49.934 "dma_device_type": 2 01:23:49.934 } 01:23:49.934 ], 01:23:49.934 "driver_specific": {} 01:23:49.934 } 01:23:49.934 ] 01:23:49.934 05:18:41 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:49.934 05:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 01:23:49.934 05:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 01:23:49.934 05:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:23:49.934 05:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:23:49.934 05:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:23:49.934 05:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:23:49.934 05:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 01:23:49.934 05:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:23:49.934 05:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:23:49.934 05:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:23:49.934 05:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:23:49.934 05:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:23:49.934 05:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:49.934 05:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:23:49.934 05:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:23:49.934 05:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:49.934 05:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 01:23:49.934 "name": "Existed_Raid", 01:23:49.934 "uuid": "00000000-0000-0000-0000-000000000000", 01:23:49.934 "strip_size_kb": 0, 01:23:49.934 "state": "configuring", 01:23:49.934 "raid_level": "raid1", 01:23:49.934 "superblock": false, 01:23:49.934 "num_base_bdevs": 3, 01:23:49.934 "num_base_bdevs_discovered": 1, 01:23:49.934 "num_base_bdevs_operational": 3, 01:23:49.934 "base_bdevs_list": [ 01:23:49.934 { 01:23:49.934 "name": "BaseBdev1", 01:23:49.934 "uuid": "aaea6b17-b56d-48c0-adad-61973922d22f", 01:23:49.934 "is_configured": true, 01:23:49.934 "data_offset": 0, 01:23:49.934 "data_size": 65536 01:23:49.934 }, 01:23:49.934 { 01:23:49.934 "name": "BaseBdev2", 01:23:49.934 "uuid": "00000000-0000-0000-0000-000000000000", 01:23:49.934 "is_configured": false, 01:23:49.934 "data_offset": 0, 01:23:49.934 "data_size": 0 01:23:49.934 }, 01:23:49.934 { 01:23:49.934 "name": "BaseBdev3", 01:23:49.934 "uuid": "00000000-0000-0000-0000-000000000000", 01:23:49.934 "is_configured": false, 01:23:49.934 "data_offset": 0, 01:23:49.934 "data_size": 0 01:23:49.934 } 01:23:49.934 ] 01:23:49.934 }' 01:23:49.934 05:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:23:49.934 05:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:23:50.502 05:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 01:23:50.502 05:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:50.502 05:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:23:50.502 [2024-12-09 05:18:41.884866] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 01:23:50.503 [2024-12-09 05:18:41.884939] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 01:23:50.503 05:18:41 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:50.503 05:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 01:23:50.503 05:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:50.503 05:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:23:50.503 [2024-12-09 05:18:41.896945] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 01:23:50.503 [2024-12-09 05:18:41.899677] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 01:23:50.503 [2024-12-09 05:18:41.899860] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 01:23:50.503 [2024-12-09 05:18:41.899994] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 01:23:50.503 [2024-12-09 05:18:41.900067] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 01:23:50.503 05:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:50.503 05:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 01:23:50.503 05:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 01:23:50.503 05:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 01:23:50.503 05:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:23:50.503 05:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:23:50.503 05:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:23:50.503 05:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 01:23:50.503 05:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 01:23:50.503 05:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:23:50.503 05:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:23:50.503 05:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:23:50.503 05:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:23:50.503 05:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:23:50.503 05:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:23:50.503 05:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:50.503 05:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:23:50.503 05:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:50.503 05:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:23:50.503 "name": "Existed_Raid", 01:23:50.503 "uuid": "00000000-0000-0000-0000-000000000000", 01:23:50.503 "strip_size_kb": 0, 01:23:50.503 "state": "configuring", 01:23:50.503 "raid_level": "raid1", 01:23:50.503 "superblock": false, 01:23:50.503 "num_base_bdevs": 3, 01:23:50.503 "num_base_bdevs_discovered": 1, 01:23:50.503 "num_base_bdevs_operational": 3, 01:23:50.503 "base_bdevs_list": [ 01:23:50.503 { 01:23:50.503 "name": "BaseBdev1", 01:23:50.503 "uuid": "aaea6b17-b56d-48c0-adad-61973922d22f", 01:23:50.503 "is_configured": true, 01:23:50.503 "data_offset": 0, 01:23:50.503 "data_size": 65536 01:23:50.503 }, 01:23:50.503 { 01:23:50.503 "name": "BaseBdev2", 01:23:50.503 "uuid": "00000000-0000-0000-0000-000000000000", 01:23:50.503 
"is_configured": false, 01:23:50.503 "data_offset": 0, 01:23:50.503 "data_size": 0 01:23:50.503 }, 01:23:50.503 { 01:23:50.503 "name": "BaseBdev3", 01:23:50.503 "uuid": "00000000-0000-0000-0000-000000000000", 01:23:50.503 "is_configured": false, 01:23:50.503 "data_offset": 0, 01:23:50.503 "data_size": 0 01:23:50.503 } 01:23:50.503 ] 01:23:50.503 }' 01:23:50.503 05:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:23:50.503 05:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:23:51.071 05:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 01:23:51.071 05:18:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:51.071 05:18:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:23:51.071 [2024-12-09 05:18:42.476789] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 01:23:51.071 BaseBdev2 01:23:51.071 05:18:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:51.071 05:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 01:23:51.071 05:18:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 01:23:51.071 05:18:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 01:23:51.071 05:18:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 01:23:51.071 05:18:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 01:23:51.071 05:18:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 01:23:51.071 05:18:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 01:23:51.071 05:18:42 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:51.071 05:18:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:23:51.071 05:18:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:51.071 05:18:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 01:23:51.071 05:18:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:51.071 05:18:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:23:51.071 [ 01:23:51.071 { 01:23:51.071 "name": "BaseBdev2", 01:23:51.071 "aliases": [ 01:23:51.071 "01ece995-fce4-4710-bb78-28c8d9a49e42" 01:23:51.071 ], 01:23:51.071 "product_name": "Malloc disk", 01:23:51.071 "block_size": 512, 01:23:51.071 "num_blocks": 65536, 01:23:51.071 "uuid": "01ece995-fce4-4710-bb78-28c8d9a49e42", 01:23:51.071 "assigned_rate_limits": { 01:23:51.071 "rw_ios_per_sec": 0, 01:23:51.071 "rw_mbytes_per_sec": 0, 01:23:51.071 "r_mbytes_per_sec": 0, 01:23:51.071 "w_mbytes_per_sec": 0 01:23:51.071 }, 01:23:51.071 "claimed": true, 01:23:51.071 "claim_type": "exclusive_write", 01:23:51.071 "zoned": false, 01:23:51.071 "supported_io_types": { 01:23:51.071 "read": true, 01:23:51.071 "write": true, 01:23:51.071 "unmap": true, 01:23:51.071 "flush": true, 01:23:51.071 "reset": true, 01:23:51.071 "nvme_admin": false, 01:23:51.071 "nvme_io": false, 01:23:51.071 "nvme_io_md": false, 01:23:51.071 "write_zeroes": true, 01:23:51.071 "zcopy": true, 01:23:51.071 "get_zone_info": false, 01:23:51.071 "zone_management": false, 01:23:51.071 "zone_append": false, 01:23:51.071 "compare": false, 01:23:51.071 "compare_and_write": false, 01:23:51.071 "abort": true, 01:23:51.071 "seek_hole": false, 01:23:51.071 "seek_data": false, 01:23:51.071 "copy": true, 01:23:51.071 "nvme_iov_md": false 01:23:51.071 }, 01:23:51.071 
"memory_domains": [ 01:23:51.071 { 01:23:51.071 "dma_device_id": "system", 01:23:51.071 "dma_device_type": 1 01:23:51.071 }, 01:23:51.071 { 01:23:51.071 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:23:51.071 "dma_device_type": 2 01:23:51.071 } 01:23:51.071 ], 01:23:51.071 "driver_specific": {} 01:23:51.071 } 01:23:51.071 ] 01:23:51.071 05:18:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:51.071 05:18:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 01:23:51.071 05:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 01:23:51.071 05:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 01:23:51.071 05:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 01:23:51.071 05:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:23:51.071 05:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:23:51.071 05:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:23:51.071 05:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:23:51.071 05:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 01:23:51.071 05:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:23:51.071 05:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:23:51.071 05:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:23:51.071 05:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:23:51.071 05:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 01:23:51.071 05:18:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:51.071 05:18:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:23:51.071 05:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:23:51.071 05:18:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:51.071 05:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:23:51.071 "name": "Existed_Raid", 01:23:51.071 "uuid": "00000000-0000-0000-0000-000000000000", 01:23:51.071 "strip_size_kb": 0, 01:23:51.071 "state": "configuring", 01:23:51.071 "raid_level": "raid1", 01:23:51.071 "superblock": false, 01:23:51.071 "num_base_bdevs": 3, 01:23:51.071 "num_base_bdevs_discovered": 2, 01:23:51.071 "num_base_bdevs_operational": 3, 01:23:51.071 "base_bdevs_list": [ 01:23:51.071 { 01:23:51.071 "name": "BaseBdev1", 01:23:51.071 "uuid": "aaea6b17-b56d-48c0-adad-61973922d22f", 01:23:51.071 "is_configured": true, 01:23:51.071 "data_offset": 0, 01:23:51.071 "data_size": 65536 01:23:51.071 }, 01:23:51.071 { 01:23:51.071 "name": "BaseBdev2", 01:23:51.071 "uuid": "01ece995-fce4-4710-bb78-28c8d9a49e42", 01:23:51.071 "is_configured": true, 01:23:51.071 "data_offset": 0, 01:23:51.071 "data_size": 65536 01:23:51.071 }, 01:23:51.071 { 01:23:51.071 "name": "BaseBdev3", 01:23:51.071 "uuid": "00000000-0000-0000-0000-000000000000", 01:23:51.071 "is_configured": false, 01:23:51.071 "data_offset": 0, 01:23:51.071 "data_size": 0 01:23:51.071 } 01:23:51.071 ] 01:23:51.071 }' 01:23:51.071 05:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:23:51.071 05:18:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:23:51.638 05:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 
512 -b BaseBdev3 01:23:51.638 05:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:51.638 05:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:23:51.638 [2024-12-09 05:18:43.125090] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 01:23:51.638 [2024-12-09 05:18:43.125161] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 01:23:51.638 [2024-12-09 05:18:43.125185] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 01:23:51.638 [2024-12-09 05:18:43.125620] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 01:23:51.638 [2024-12-09 05:18:43.125885] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 01:23:51.638 BaseBdev3 01:23:51.638 [2024-12-09 05:18:43.125964] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 01:23:51.639 [2024-12-09 05:18:43.126372] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:23:51.639 05:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:51.639 05:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 01:23:51.639 05:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 01:23:51.639 05:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 01:23:51.639 05:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 01:23:51.639 05:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 01:23:51.639 05:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 01:23:51.639 05:18:43 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 01:23:51.639 05:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:51.639 05:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:23:51.639 05:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:51.639 05:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 01:23:51.639 05:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:51.639 05:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:23:51.639 [ 01:23:51.639 { 01:23:51.639 "name": "BaseBdev3", 01:23:51.639 "aliases": [ 01:23:51.639 "99d8e6a6-1727-4671-a1d3-cba088fd5afe" 01:23:51.639 ], 01:23:51.639 "product_name": "Malloc disk", 01:23:51.639 "block_size": 512, 01:23:51.639 "num_blocks": 65536, 01:23:51.639 "uuid": "99d8e6a6-1727-4671-a1d3-cba088fd5afe", 01:23:51.639 "assigned_rate_limits": { 01:23:51.639 "rw_ios_per_sec": 0, 01:23:51.639 "rw_mbytes_per_sec": 0, 01:23:51.639 "r_mbytes_per_sec": 0, 01:23:51.639 "w_mbytes_per_sec": 0 01:23:51.639 }, 01:23:51.639 "claimed": true, 01:23:51.639 "claim_type": "exclusive_write", 01:23:51.639 "zoned": false, 01:23:51.639 "supported_io_types": { 01:23:51.639 "read": true, 01:23:51.639 "write": true, 01:23:51.639 "unmap": true, 01:23:51.639 "flush": true, 01:23:51.639 "reset": true, 01:23:51.639 "nvme_admin": false, 01:23:51.639 "nvme_io": false, 01:23:51.639 "nvme_io_md": false, 01:23:51.639 "write_zeroes": true, 01:23:51.639 "zcopy": true, 01:23:51.639 "get_zone_info": false, 01:23:51.639 "zone_management": false, 01:23:51.639 "zone_append": false, 01:23:51.639 "compare": false, 01:23:51.639 "compare_and_write": false, 01:23:51.639 "abort": true, 01:23:51.639 "seek_hole": false, 01:23:51.639 "seek_data": false, 01:23:51.639 
"copy": true, 01:23:51.639 "nvme_iov_md": false 01:23:51.639 }, 01:23:51.639 "memory_domains": [ 01:23:51.639 { 01:23:51.639 "dma_device_id": "system", 01:23:51.639 "dma_device_type": 1 01:23:51.639 }, 01:23:51.639 { 01:23:51.639 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:23:51.639 "dma_device_type": 2 01:23:51.639 } 01:23:51.639 ], 01:23:51.639 "driver_specific": {} 01:23:51.639 } 01:23:51.639 ] 01:23:51.639 05:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:51.639 05:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 01:23:51.639 05:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 01:23:51.639 05:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 01:23:51.639 05:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 01:23:51.639 05:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:23:51.639 05:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:23:51.639 05:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:23:51.639 05:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:23:51.639 05:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 01:23:51.639 05:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:23:51.639 05:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:23:51.639 05:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:23:51.639 05:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:23:51.639 05:18:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:23:51.639 05:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:51.639 05:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:23:51.639 05:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:23:51.639 05:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:51.639 05:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:23:51.639 "name": "Existed_Raid", 01:23:51.639 "uuid": "4799d0bc-3d0d-4c58-b2a0-3e3ada016850", 01:23:51.639 "strip_size_kb": 0, 01:23:51.639 "state": "online", 01:23:51.639 "raid_level": "raid1", 01:23:51.639 "superblock": false, 01:23:51.639 "num_base_bdevs": 3, 01:23:51.639 "num_base_bdevs_discovered": 3, 01:23:51.639 "num_base_bdevs_operational": 3, 01:23:51.639 "base_bdevs_list": [ 01:23:51.639 { 01:23:51.639 "name": "BaseBdev1", 01:23:51.639 "uuid": "aaea6b17-b56d-48c0-adad-61973922d22f", 01:23:51.639 "is_configured": true, 01:23:51.639 "data_offset": 0, 01:23:51.639 "data_size": 65536 01:23:51.639 }, 01:23:51.639 { 01:23:51.639 "name": "BaseBdev2", 01:23:51.639 "uuid": "01ece995-fce4-4710-bb78-28c8d9a49e42", 01:23:51.639 "is_configured": true, 01:23:51.639 "data_offset": 0, 01:23:51.639 "data_size": 65536 01:23:51.639 }, 01:23:51.639 { 01:23:51.639 "name": "BaseBdev3", 01:23:51.639 "uuid": "99d8e6a6-1727-4671-a1d3-cba088fd5afe", 01:23:51.639 "is_configured": true, 01:23:51.639 "data_offset": 0, 01:23:51.639 "data_size": 65536 01:23:51.639 } 01:23:51.639 ] 01:23:51.639 }' 01:23:51.639 05:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:23:51.639 05:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:23:52.224 05:18:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 01:23:52.224 05:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 01:23:52.224 05:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 01:23:52.224 05:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 01:23:52.224 05:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 01:23:52.224 05:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 01:23:52.224 05:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 01:23:52.224 05:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:52.224 05:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:23:52.224 05:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 01:23:52.224 [2024-12-09 05:18:43.682014] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 01:23:52.224 05:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:52.224 05:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 01:23:52.224 "name": "Existed_Raid", 01:23:52.224 "aliases": [ 01:23:52.224 "4799d0bc-3d0d-4c58-b2a0-3e3ada016850" 01:23:52.224 ], 01:23:52.224 "product_name": "Raid Volume", 01:23:52.224 "block_size": 512, 01:23:52.224 "num_blocks": 65536, 01:23:52.225 "uuid": "4799d0bc-3d0d-4c58-b2a0-3e3ada016850", 01:23:52.225 "assigned_rate_limits": { 01:23:52.225 "rw_ios_per_sec": 0, 01:23:52.225 "rw_mbytes_per_sec": 0, 01:23:52.225 "r_mbytes_per_sec": 0, 01:23:52.225 "w_mbytes_per_sec": 0 01:23:52.225 }, 01:23:52.225 "claimed": false, 01:23:52.225 "zoned": false, 
01:23:52.225 "supported_io_types": { 01:23:52.225 "read": true, 01:23:52.225 "write": true, 01:23:52.225 "unmap": false, 01:23:52.225 "flush": false, 01:23:52.225 "reset": true, 01:23:52.225 "nvme_admin": false, 01:23:52.225 "nvme_io": false, 01:23:52.225 "nvme_io_md": false, 01:23:52.225 "write_zeroes": true, 01:23:52.225 "zcopy": false, 01:23:52.225 "get_zone_info": false, 01:23:52.225 "zone_management": false, 01:23:52.225 "zone_append": false, 01:23:52.225 "compare": false, 01:23:52.225 "compare_and_write": false, 01:23:52.225 "abort": false, 01:23:52.225 "seek_hole": false, 01:23:52.225 "seek_data": false, 01:23:52.225 "copy": false, 01:23:52.225 "nvme_iov_md": false 01:23:52.225 }, 01:23:52.225 "memory_domains": [ 01:23:52.225 { 01:23:52.225 "dma_device_id": "system", 01:23:52.225 "dma_device_type": 1 01:23:52.225 }, 01:23:52.225 { 01:23:52.225 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:23:52.225 "dma_device_type": 2 01:23:52.225 }, 01:23:52.225 { 01:23:52.225 "dma_device_id": "system", 01:23:52.225 "dma_device_type": 1 01:23:52.225 }, 01:23:52.225 { 01:23:52.225 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:23:52.225 "dma_device_type": 2 01:23:52.225 }, 01:23:52.225 { 01:23:52.225 "dma_device_id": "system", 01:23:52.225 "dma_device_type": 1 01:23:52.225 }, 01:23:52.225 { 01:23:52.225 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:23:52.225 "dma_device_type": 2 01:23:52.225 } 01:23:52.225 ], 01:23:52.225 "driver_specific": { 01:23:52.225 "raid": { 01:23:52.225 "uuid": "4799d0bc-3d0d-4c58-b2a0-3e3ada016850", 01:23:52.225 "strip_size_kb": 0, 01:23:52.225 "state": "online", 01:23:52.225 "raid_level": "raid1", 01:23:52.225 "superblock": false, 01:23:52.225 "num_base_bdevs": 3, 01:23:52.225 "num_base_bdevs_discovered": 3, 01:23:52.225 "num_base_bdevs_operational": 3, 01:23:52.225 "base_bdevs_list": [ 01:23:52.225 { 01:23:52.225 "name": "BaseBdev1", 01:23:52.225 "uuid": "aaea6b17-b56d-48c0-adad-61973922d22f", 01:23:52.225 "is_configured": true, 01:23:52.225 
"data_offset": 0, 01:23:52.225 "data_size": 65536 01:23:52.225 }, 01:23:52.225 { 01:23:52.225 "name": "BaseBdev2", 01:23:52.225 "uuid": "01ece995-fce4-4710-bb78-28c8d9a49e42", 01:23:52.225 "is_configured": true, 01:23:52.225 "data_offset": 0, 01:23:52.225 "data_size": 65536 01:23:52.225 }, 01:23:52.225 { 01:23:52.225 "name": "BaseBdev3", 01:23:52.225 "uuid": "99d8e6a6-1727-4671-a1d3-cba088fd5afe", 01:23:52.225 "is_configured": true, 01:23:52.225 "data_offset": 0, 01:23:52.225 "data_size": 65536 01:23:52.225 } 01:23:52.225 ] 01:23:52.225 } 01:23:52.225 } 01:23:52.225 }' 01:23:52.225 05:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 01:23:52.225 05:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 01:23:52.225 BaseBdev2 01:23:52.225 BaseBdev3' 01:23:52.225 05:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:23:52.225 05:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 01:23:52.225 05:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:23:52.225 05:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 01:23:52.225 05:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:52.225 05:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:23:52.225 05:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:23:52.484 05:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:52.484 05:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 01:23:52.484 05:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:23:52.484 05:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:23:52.484 05:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 01:23:52.484 05:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:52.484 05:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:23:52.484 05:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:23:52.484 05:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:52.484 05:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:23:52.484 05:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:23:52.484 05:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:23:52.484 05:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 01:23:52.484 05:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:23:52.484 05:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:52.484 05:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:23:52.484 05:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:52.484 05:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:23:52.484 05:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 
== \5\1\2\ \ \ ]] 01:23:52.484 05:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 01:23:52.484 05:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:52.484 05:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:23:52.484 [2024-12-09 05:18:43.973492] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 01:23:52.484 05:18:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:52.484 05:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 01:23:52.484 05:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 01:23:52.484 05:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 01:23:52.484 05:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 01:23:52.484 05:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 01:23:52.484 05:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 01:23:52.484 05:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:23:52.484 05:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:23:52.484 05:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:23:52.484 05:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:23:52.484 05:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 01:23:52.484 05:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:23:52.484 05:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 01:23:52.484 05:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:23:52.484 05:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:23:52.484 05:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:23:52.484 05:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:23:52.484 05:18:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:52.484 05:18:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:23:52.484 05:18:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:52.742 05:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:23:52.742 "name": "Existed_Raid", 01:23:52.742 "uuid": "4799d0bc-3d0d-4c58-b2a0-3e3ada016850", 01:23:52.742 "strip_size_kb": 0, 01:23:52.742 "state": "online", 01:23:52.742 "raid_level": "raid1", 01:23:52.742 "superblock": false, 01:23:52.742 "num_base_bdevs": 3, 01:23:52.742 "num_base_bdevs_discovered": 2, 01:23:52.742 "num_base_bdevs_operational": 2, 01:23:52.742 "base_bdevs_list": [ 01:23:52.742 { 01:23:52.742 "name": null, 01:23:52.742 "uuid": "00000000-0000-0000-0000-000000000000", 01:23:52.742 "is_configured": false, 01:23:52.742 "data_offset": 0, 01:23:52.742 "data_size": 65536 01:23:52.742 }, 01:23:52.742 { 01:23:52.742 "name": "BaseBdev2", 01:23:52.742 "uuid": "01ece995-fce4-4710-bb78-28c8d9a49e42", 01:23:52.742 "is_configured": true, 01:23:52.742 "data_offset": 0, 01:23:52.742 "data_size": 65536 01:23:52.742 }, 01:23:52.742 { 01:23:52.742 "name": "BaseBdev3", 01:23:52.742 "uuid": "99d8e6a6-1727-4671-a1d3-cba088fd5afe", 01:23:52.742 "is_configured": true, 01:23:52.742 "data_offset": 0, 01:23:52.742 "data_size": 65536 01:23:52.742 } 01:23:52.742 ] 
01:23:52.742 }' 01:23:52.743 05:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:23:52.743 05:18:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:23:53.001 05:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 01:23:53.001 05:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 01:23:53.001 05:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 01:23:53.001 05:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 01:23:53.001 05:18:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:53.001 05:18:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:23:53.001 05:18:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:53.001 05:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 01:23:53.001 05:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 01:23:53.001 05:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 01:23:53.001 05:18:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:53.001 05:18:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:23:53.001 [2024-12-09 05:18:44.599368] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 01:23:53.259 05:18:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:53.259 05:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 01:23:53.259 05:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 01:23:53.259 05:18:44 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 01:23:53.259 05:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 01:23:53.259 05:18:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:53.259 05:18:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:23:53.259 05:18:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:53.259 05:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 01:23:53.259 05:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 01:23:53.259 05:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 01:23:53.259 05:18:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:53.259 05:18:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:23:53.259 [2024-12-09 05:18:44.759943] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 01:23:53.259 [2024-12-09 05:18:44.760067] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 01:23:53.259 [2024-12-09 05:18:44.839567] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 01:23:53.259 [2024-12-09 05:18:44.839629] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 01:23:53.259 [2024-12-09 05:18:44.839649] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 01:23:53.259 05:18:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:53.259 05:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 01:23:53.259 05:18:44 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 01:23:53.259 05:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 01:23:53.259 05:18:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:53.259 05:18:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:23:53.259 05:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 01:23:53.259 05:18:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:53.520 05:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 01:23:53.520 05:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 01:23:53.520 05:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 01:23:53.520 05:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 01:23:53.520 05:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 01:23:53.520 05:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 01:23:53.520 05:18:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:53.520 05:18:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:23:53.520 BaseBdev2 01:23:53.520 05:18:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:53.520 05:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 01:23:53.520 05:18:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 01:23:53.520 05:18:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 01:23:53.520 
05:18:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 01:23:53.520 05:18:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 01:23:53.520 05:18:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 01:23:53.520 05:18:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 01:23:53.520 05:18:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:53.520 05:18:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:23:53.520 05:18:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:53.520 05:18:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 01:23:53.520 05:18:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:53.520 05:18:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:23:53.520 [ 01:23:53.520 { 01:23:53.520 "name": "BaseBdev2", 01:23:53.520 "aliases": [ 01:23:53.520 "186ae824-283c-4ff4-b0eb-6dc6ac66a588" 01:23:53.520 ], 01:23:53.520 "product_name": "Malloc disk", 01:23:53.520 "block_size": 512, 01:23:53.520 "num_blocks": 65536, 01:23:53.520 "uuid": "186ae824-283c-4ff4-b0eb-6dc6ac66a588", 01:23:53.520 "assigned_rate_limits": { 01:23:53.520 "rw_ios_per_sec": 0, 01:23:53.520 "rw_mbytes_per_sec": 0, 01:23:53.520 "r_mbytes_per_sec": 0, 01:23:53.520 "w_mbytes_per_sec": 0 01:23:53.520 }, 01:23:53.520 "claimed": false, 01:23:53.520 "zoned": false, 01:23:53.520 "supported_io_types": { 01:23:53.520 "read": true, 01:23:53.520 "write": true, 01:23:53.520 "unmap": true, 01:23:53.520 "flush": true, 01:23:53.520 "reset": true, 01:23:53.520 "nvme_admin": false, 01:23:53.520 "nvme_io": false, 01:23:53.520 "nvme_io_md": false, 01:23:53.520 "write_zeroes": true, 
01:23:53.520 "zcopy": true,
01:23:53.520 "get_zone_info": false,
01:23:53.520 "zone_management": false,
01:23:53.520 "zone_append": false,
01:23:53.520 "compare": false,
01:23:53.520 "compare_and_write": false,
01:23:53.520 "abort": true,
01:23:53.520 "seek_hole": false,
01:23:53.520 "seek_data": false,
01:23:53.520 "copy": true,
01:23:53.520 "nvme_iov_md": false
01:23:53.520 },
01:23:53.520 "memory_domains": [
01:23:53.520 {
01:23:53.520 "dma_device_id": "system",
01:23:53.520 "dma_device_type": 1
01:23:53.520 },
01:23:53.520 {
01:23:53.520 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
01:23:53.520 "dma_device_type": 2
01:23:53.520 }
01:23:53.520 ],
01:23:53.520 "driver_specific": {}
01:23:53.520 }
01:23:53.520 ]
01:23:53.520 05:18:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:23:53.520 05:18:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
01:23:53.520 05:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ ))
01:23:53.520 05:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
01:23:53.520 05:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
01:23:53.520 05:18:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
01:23:53.520 05:18:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
01:23:53.520 BaseBdev3
01:23:53.520 05:18:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:23:53.520 05:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3
01:23:53.520 05:18:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3
01:23:53.520 05:18:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
01:23:53.520 05:18:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
01:23:53.520 05:18:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
01:23:53.520 05:18:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
01:23:53.520 05:18:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
01:23:53.520 05:18:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
01:23:53.520 05:18:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
01:23:53.520 05:18:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:23:53.520 05:18:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
01:23:53.520 05:18:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
01:23:53.520 05:18:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
01:23:53.520 [
01:23:53.520 {
01:23:53.520 "name": "BaseBdev3",
01:23:53.520 "aliases": [
01:23:53.520 "61f57c3e-d5d3-4caf-9a48-e82a77914cb8"
01:23:53.520 ],
01:23:53.520 "product_name": "Malloc disk",
01:23:53.520 "block_size": 512,
01:23:53.520 "num_blocks": 65536,
01:23:53.520 "uuid": "61f57c3e-d5d3-4caf-9a48-e82a77914cb8",
01:23:53.520 "assigned_rate_limits": {
01:23:53.520 "rw_ios_per_sec": 0,
01:23:53.520 "rw_mbytes_per_sec": 0,
01:23:53.520 "r_mbytes_per_sec": 0,
01:23:53.520 "w_mbytes_per_sec": 0
01:23:53.520 },
01:23:53.520 "claimed": false,
01:23:53.520 "zoned": false,
01:23:53.520 "supported_io_types": {
01:23:53.520 "read": true,
01:23:53.520 "write": true,
01:23:53.520 "unmap": true,
01:23:53.520 "flush": true,
01:23:53.520 "reset": true,
01:23:53.520 "nvme_admin": false,
01:23:53.520 "nvme_io": false,
01:23:53.520 "nvme_io_md": false,
01:23:53.520 "write_zeroes": true,
01:23:53.520 "zcopy": true,
01:23:53.520 "get_zone_info": false,
01:23:53.520 "zone_management": false,
01:23:53.520 "zone_append": false,
01:23:53.520 "compare": false,
01:23:53.520 "compare_and_write": false,
01:23:53.520 "abort": true,
01:23:53.520 "seek_hole": false,
01:23:53.520 "seek_data": false,
01:23:53.520 "copy": true,
01:23:53.520 "nvme_iov_md": false
01:23:53.520 },
01:23:53.520 "memory_domains": [
01:23:53.520 {
01:23:53.520 "dma_device_id": "system",
01:23:53.520 "dma_device_type": 1
01:23:53.520 },
01:23:53.520 {
01:23:53.520 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
01:23:53.520 "dma_device_type": 2
01:23:53.520 }
01:23:53.520 ],
01:23:53.520 "driver_specific": {}
01:23:53.520 }
01:23:53.520 ]
01:23:53.520 05:18:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:23:53.520 05:18:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
01:23:53.520 05:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ ))
01:23:53.520 05:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
01:23:53.520 05:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
01:23:53.520 05:18:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
01:23:53.520 05:18:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
01:23:53.520 [2024-12-09 05:18:45.050745] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
01:23:53.520 [2024-12-09 05:18:45.050807] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
01:23:53.520 [2024-12-09 05:18:45.050838] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
01:23:53.520 [2024-12-09 05:18:45.053900] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
01:23:53.520 05:18:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:23:53.520 05:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
01:23:53.520 05:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
01:23:53.520 05:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
01:23:53.520 05:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
01:23:53.520 05:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
01:23:53.520 05:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
01:23:53.521 05:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
01:23:53.521 05:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
01:23:53.521 05:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
01:23:53.521 05:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
01:23:53.521 05:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
01:23:53.521 05:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
01:23:53.521 05:18:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
01:23:53.521 05:18:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
01:23:53.521 05:18:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:23:53.521 05:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
01:23:53.521 "name": "Existed_Raid",
01:23:53.521 "uuid": "00000000-0000-0000-0000-000000000000",
01:23:53.521 "strip_size_kb": 0,
01:23:53.521 "state": "configuring",
01:23:53.521 "raid_level": "raid1",
01:23:53.521 "superblock": false,
01:23:53.521 "num_base_bdevs": 3,
01:23:53.521 "num_base_bdevs_discovered": 2,
01:23:53.521 "num_base_bdevs_operational": 3,
01:23:53.521 "base_bdevs_list": [
01:23:53.521 {
01:23:53.521 "name": "BaseBdev1",
01:23:53.521 "uuid": "00000000-0000-0000-0000-000000000000",
01:23:53.521 "is_configured": false,
01:23:53.521 "data_offset": 0,
01:23:53.521 "data_size": 0
01:23:53.521 },
01:23:53.521 {
01:23:53.521 "name": "BaseBdev2",
01:23:53.521 "uuid": "186ae824-283c-4ff4-b0eb-6dc6ac66a588",
01:23:53.521 "is_configured": true,
01:23:53.521 "data_offset": 0,
01:23:53.521 "data_size": 65536
01:23:53.521 },
01:23:53.521 {
01:23:53.521 "name": "BaseBdev3",
01:23:53.521 "uuid": "61f57c3e-d5d3-4caf-9a48-e82a77914cb8",
01:23:53.521 "is_configured": true,
01:23:53.521 "data_offset": 0,
01:23:53.521 "data_size": 65536
01:23:53.521 }
01:23:53.521 ]
01:23:53.521 }'
01:23:53.521 05:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
01:23:53.521 05:18:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
01:23:54.107 05:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2
01:23:54.107 05:18:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
01:23:54.107 05:18:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
01:23:54.107 [2024-12-09 05:18:45.546904] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
01:23:54.107 05:18:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:23:54.107 05:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
01:23:54.107 05:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
01:23:54.107 05:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
01:23:54.107 05:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
01:23:54.107 05:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
01:23:54.107 05:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
01:23:54.107 05:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
01:23:54.107 05:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
01:23:54.107 05:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
01:23:54.107 05:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
01:23:54.107 05:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
01:23:54.107 05:18:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
01:23:54.107 05:18:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
01:23:54.107 05:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
01:23:54.107 05:18:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:23:54.107 05:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
01:23:54.107 "name": "Existed_Raid",
01:23:54.107 "uuid": "00000000-0000-0000-0000-000000000000",
01:23:54.107 "strip_size_kb": 0,
01:23:54.107 "state": "configuring",
01:23:54.107 "raid_level": "raid1",
01:23:54.107 "superblock": false,
01:23:54.107 "num_base_bdevs": 3,
01:23:54.107 "num_base_bdevs_discovered": 1,
01:23:54.107 "num_base_bdevs_operational": 3,
01:23:54.107 "base_bdevs_list": [
01:23:54.107 {
01:23:54.107 "name": "BaseBdev1",
01:23:54.107 "uuid": "00000000-0000-0000-0000-000000000000",
01:23:54.107 "is_configured": false,
01:23:54.107 "data_offset": 0,
01:23:54.107 "data_size": 0
01:23:54.107 },
01:23:54.107 {
01:23:54.107 "name": null,
01:23:54.107 "uuid": "186ae824-283c-4ff4-b0eb-6dc6ac66a588",
01:23:54.107 "is_configured": false,
01:23:54.107 "data_offset": 0,
01:23:54.107 "data_size": 65536
01:23:54.107 },
01:23:54.107 {
01:23:54.107 "name": "BaseBdev3",
01:23:54.107 "uuid": "61f57c3e-d5d3-4caf-9a48-e82a77914cb8",
01:23:54.107 "is_configured": true,
01:23:54.107 "data_offset": 0,
01:23:54.107 "data_size": 65536
01:23:54.107 }
01:23:54.107 ]
01:23:54.107 }'
01:23:54.107 05:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
01:23:54.107 05:18:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
01:23:54.671 05:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all
01:23:54.671 05:18:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
01:23:54.671 05:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured'
01:23:54.671 05:18:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
01:23:54.671 05:18:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:23:54.671 05:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]]
01:23:54.671 05:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
01:23:54.671 05:18:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
01:23:54.671 05:18:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
01:23:54.671 [2024-12-09 05:18:46.153745] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
01:23:54.671 BaseBdev1
01:23:54.671 05:18:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:23:54.671 05:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1
01:23:54.671 05:18:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1
01:23:54.671 05:18:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
01:23:54.671 05:18:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
01:23:54.671 05:18:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
01:23:54.671 05:18:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
01:23:54.671 05:18:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
01:23:54.671 05:18:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
01:23:54.671 05:18:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
01:23:54.671 05:18:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:23:54.671 05:18:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
01:23:54.672 05:18:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
01:23:54.672 05:18:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
01:23:54.672 [
01:23:54.672 {
01:23:54.672 "name": "BaseBdev1",
01:23:54.672 "aliases": [
01:23:54.672 "e3d9894b-044b-4702-a0fd-24e3df249064"
01:23:54.672 ],
01:23:54.672 "product_name": "Malloc disk",
01:23:54.672 "block_size": 512,
01:23:54.672 "num_blocks": 65536,
01:23:54.672 "uuid": "e3d9894b-044b-4702-a0fd-24e3df249064",
01:23:54.672 "assigned_rate_limits": {
01:23:54.672 "rw_ios_per_sec": 0,
01:23:54.672 "rw_mbytes_per_sec": 0,
01:23:54.672 "r_mbytes_per_sec": 0,
01:23:54.672 "w_mbytes_per_sec": 0
01:23:54.672 },
01:23:54.672 "claimed": true,
01:23:54.672 "claim_type": "exclusive_write",
01:23:54.672 "zoned": false,
01:23:54.672 "supported_io_types": {
01:23:54.672 "read": true,
01:23:54.672 "write": true,
01:23:54.672 "unmap": true,
01:23:54.672 "flush": true,
01:23:54.672 "reset": true,
01:23:54.672 "nvme_admin": false,
01:23:54.672 "nvme_io": false,
01:23:54.672 "nvme_io_md": false,
01:23:54.672 "write_zeroes": true,
01:23:54.672 "zcopy": true,
01:23:54.672 "get_zone_info": false,
01:23:54.672 "zone_management": false,
01:23:54.672 "zone_append": false,
01:23:54.672 "compare": false,
01:23:54.672 "compare_and_write": false,
01:23:54.672 "abort": true,
01:23:54.672 "seek_hole": false,
01:23:54.672 "seek_data": false,
01:23:54.672 "copy": true,
01:23:54.672 "nvme_iov_md": false
01:23:54.672 },
01:23:54.672 "memory_domains": [
01:23:54.672 {
01:23:54.672 "dma_device_id": "system",
01:23:54.672 "dma_device_type": 1
01:23:54.672 },
01:23:54.672 {
01:23:54.672 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
01:23:54.672 "dma_device_type": 2
01:23:54.672 }
01:23:54.672 ],
01:23:54.672 "driver_specific": {}
01:23:54.672 }
01:23:54.672 ]
01:23:54.672 05:18:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:23:54.672 05:18:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
01:23:54.672 05:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
01:23:54.672 05:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
01:23:54.672 05:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
01:23:54.672 05:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
01:23:54.672 05:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
01:23:54.672 05:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
01:23:54.672 05:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
01:23:54.672 05:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
01:23:54.672 05:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
01:23:54.672 05:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
01:23:54.672 05:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
01:23:54.672 05:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
01:23:54.672 05:18:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
01:23:54.672 05:18:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
01:23:54.672 05:18:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:23:54.672 05:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
01:23:54.672 "name": "Existed_Raid",
01:23:54.672 "uuid": "00000000-0000-0000-0000-000000000000",
01:23:54.672 "strip_size_kb": 0,
01:23:54.672 "state": "configuring",
01:23:54.672 "raid_level": "raid1",
01:23:54.672 "superblock": false,
01:23:54.672 "num_base_bdevs": 3,
01:23:54.672 "num_base_bdevs_discovered": 2,
01:23:54.672 "num_base_bdevs_operational": 3,
01:23:54.672 "base_bdevs_list": [
01:23:54.672 {
01:23:54.672 "name": "BaseBdev1",
01:23:54.672 "uuid": "e3d9894b-044b-4702-a0fd-24e3df249064",
01:23:54.672 "is_configured": true,
01:23:54.672 "data_offset": 0,
01:23:54.672 "data_size": 65536
01:23:54.672 },
01:23:54.672 {
01:23:54.672 "name": null,
01:23:54.672 "uuid": "186ae824-283c-4ff4-b0eb-6dc6ac66a588",
01:23:54.672 "is_configured": false,
01:23:54.672 "data_offset": 0,
01:23:54.672 "data_size": 65536
01:23:54.672 },
01:23:54.672 {
01:23:54.672 "name": "BaseBdev3",
01:23:54.672 "uuid": "61f57c3e-d5d3-4caf-9a48-e82a77914cb8",
01:23:54.672 "is_configured": true,
01:23:54.672 "data_offset": 0,
01:23:54.672 "data_size": 65536
01:23:54.672 }
01:23:54.672 ]
01:23:54.672 }'
01:23:54.672 05:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
01:23:54.672 05:18:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
01:23:55.242 05:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all
01:23:55.242 05:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured'
01:23:55.242 05:18:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
01:23:55.242 05:18:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
01:23:55.242 05:18:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:23:55.242 05:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]]
01:23:55.242 05:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3
01:23:55.242 05:18:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
01:23:55.242 05:18:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
01:23:55.242 [2024-12-09 05:18:46.753967] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
01:23:55.242 05:18:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:23:55.242 05:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
01:23:55.242 05:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
01:23:55.242 05:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
01:23:55.242 05:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
01:23:55.242 05:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
01:23:55.242 05:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
01:23:55.242 05:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
01:23:55.242 05:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
01:23:55.242 05:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
01:23:55.242 05:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
01:23:55.242 05:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
01:23:55.242 05:18:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
01:23:55.242 05:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
01:23:55.242 05:18:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
01:23:55.242 05:18:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:23:55.242 05:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
01:23:55.242 "name": "Existed_Raid",
01:23:55.242 "uuid": "00000000-0000-0000-0000-000000000000",
01:23:55.242 "strip_size_kb": 0,
01:23:55.242 "state": "configuring",
01:23:55.242 "raid_level": "raid1",
01:23:55.242 "superblock": false,
01:23:55.242 "num_base_bdevs": 3,
01:23:55.242 "num_base_bdevs_discovered": 1,
01:23:55.242 "num_base_bdevs_operational": 3,
01:23:55.242 "base_bdevs_list": [
01:23:55.242 {
01:23:55.242 "name": "BaseBdev1",
01:23:55.242 "uuid": "e3d9894b-044b-4702-a0fd-24e3df249064",
01:23:55.242 "is_configured": true,
01:23:55.242 "data_offset": 0,
01:23:55.242 "data_size": 65536
01:23:55.242 },
01:23:55.242 {
01:23:55.242 "name": null,
01:23:55.242 "uuid": "186ae824-283c-4ff4-b0eb-6dc6ac66a588",
01:23:55.242 "is_configured": false,
01:23:55.242 "data_offset": 0,
01:23:55.242 "data_size": 65536
01:23:55.242 },
01:23:55.242 {
01:23:55.242 "name": null,
01:23:55.242 "uuid": "61f57c3e-d5d3-4caf-9a48-e82a77914cb8",
01:23:55.242 "is_configured": false,
01:23:55.242 "data_offset": 0,
01:23:55.242 "data_size": 65536
01:23:55.242 }
01:23:55.242 ]
01:23:55.242 }'
01:23:55.243 05:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
01:23:55.243 05:18:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
01:23:55.807 05:18:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all
01:23:55.807 05:18:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured'
01:23:55.807 05:18:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
01:23:55.807 05:18:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
01:23:55.807 05:18:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:23:55.807 05:18:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]]
01:23:55.807 05:18:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3
01:23:55.807 05:18:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
01:23:55.807 05:18:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
01:23:55.807 [2024-12-09 05:18:47.346213] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
01:23:55.807 05:18:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:23:55.807 05:18:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
01:23:55.807 05:18:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
01:23:55.807 05:18:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
01:23:55.807 05:18:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
01:23:55.807 05:18:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
01:23:55.807 05:18:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
01:23:55.807 05:18:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
01:23:55.807 05:18:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
01:23:55.807 05:18:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
01:23:55.807 05:18:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
01:23:55.807 05:18:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
01:23:55.807 05:18:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
01:23:55.807 05:18:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
01:23:55.807 05:18:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
01:23:55.807 05:18:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:23:55.807 05:18:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
01:23:55.807 "name": "Existed_Raid",
01:23:55.807 "uuid": "00000000-0000-0000-0000-000000000000",
01:23:55.807 "strip_size_kb": 0,
01:23:55.807 "state": "configuring",
01:23:55.807 "raid_level": "raid1",
01:23:55.807 "superblock": false,
01:23:55.807 "num_base_bdevs": 3,
01:23:55.807 "num_base_bdevs_discovered": 2,
01:23:55.807 "num_base_bdevs_operational": 3,
01:23:55.807 "base_bdevs_list": [
01:23:55.807 {
01:23:55.807 "name": "BaseBdev1",
01:23:55.807 "uuid": "e3d9894b-044b-4702-a0fd-24e3df249064",
01:23:55.807 "is_configured": true,
01:23:55.807 "data_offset": 0,
01:23:55.807 "data_size": 65536
01:23:55.807 },
01:23:55.807 {
01:23:55.807 "name": null,
01:23:55.807 "uuid": "186ae824-283c-4ff4-b0eb-6dc6ac66a588",
01:23:55.807 "is_configured": false,
01:23:55.807 "data_offset": 0,
01:23:55.807 "data_size": 65536
01:23:55.807 },
01:23:55.807 {
01:23:55.807 "name": "BaseBdev3",
01:23:55.807 "uuid": "61f57c3e-d5d3-4caf-9a48-e82a77914cb8",
01:23:55.807 "is_configured": true,
01:23:55.807 "data_offset": 0,
01:23:55.807 "data_size": 65536
01:23:55.807 }
01:23:55.807 ]
01:23:55.807 }'
01:23:55.807 05:18:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
01:23:55.807 05:18:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
01:23:56.373 05:18:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all
01:23:56.373 05:18:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
01:23:56.373 05:18:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
01:23:56.373 05:18:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured'
01:23:56.373 05:18:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:23:56.373 05:18:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]]
01:23:56.373 05:18:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1
01:23:56.373 05:18:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
01:23:56.373 05:18:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
01:23:56.373 [2024-12-09 05:18:47.934331] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
01:23:56.631 05:18:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:23:56.631 05:18:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
01:23:56.631 05:18:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
01:23:56.631 05:18:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
01:23:56.631 05:18:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
01:23:56.631 05:18:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
01:23:56.631 05:18:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
01:23:56.631 05:18:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
01:23:56.631 05:18:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
01:23:56.631 05:18:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
01:23:56.631 05:18:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
01:23:56.631 05:18:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
01:23:56.631 05:18:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
01:23:56.631 05:18:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
01:23:56.631 05:18:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
01:23:56.631 05:18:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:23:56.631 05:18:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
01:23:56.631 "name": "Existed_Raid",
01:23:56.631 "uuid": "00000000-0000-0000-0000-000000000000",
01:23:56.631 "strip_size_kb": 0,
01:23:56.631 "state": "configuring",
01:23:56.631 "raid_level": "raid1",
01:23:56.631 "superblock": false,
01:23:56.631 "num_base_bdevs": 3,
01:23:56.631 "num_base_bdevs_discovered": 1,
01:23:56.631 "num_base_bdevs_operational": 3,
01:23:56.631 "base_bdevs_list": [
01:23:56.631 {
01:23:56.631 "name": null,
01:23:56.631 "uuid": "e3d9894b-044b-4702-a0fd-24e3df249064",
01:23:56.631 "is_configured": false,
01:23:56.631 "data_offset": 0,
01:23:56.631 "data_size": 65536
01:23:56.631 },
01:23:56.631 {
01:23:56.631 "name": null,
01:23:56.631 "uuid": "186ae824-283c-4ff4-b0eb-6dc6ac66a588",
01:23:56.631 "is_configured": false,
01:23:56.631 "data_offset": 0,
01:23:56.631 "data_size": 65536
01:23:56.631 },
01:23:56.631 {
01:23:56.631 "name": "BaseBdev3",
01:23:56.631 "uuid": "61f57c3e-d5d3-4caf-9a48-e82a77914cb8",
01:23:56.631 "is_configured": true,
01:23:56.631 "data_offset": 0,
01:23:56.631 "data_size": 65536
01:23:56.631 }
01:23:56.631 ]
01:23:56.631 }'
01:23:56.631 05:18:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
01:23:56.631 05:18:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
01:23:57.197 05:18:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all
01:23:57.197 05:18:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured'
01:23:57.197 05:18:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
01:23:57.197 05:18:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
01:23:57.197 05:18:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:23:57.197 05:18:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]]
01:23:57.197 05:18:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2
01:23:57.197 05:18:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
01:23:57.197 05:18:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
01:23:57.197 [2024-12-09 05:18:48.618123] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
01:23:57.197 05:18:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:23:57.197 05:18:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
01:23:57.197 05:18:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
01:23:57.197 05:18:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
01:23:57.197 05:18:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
01:23:57.197 05:18:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
01:23:57.197 05:18:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
01:23:57.197 05:18:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
01:23:57.197 05:18:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
01:23:57.197 05:18:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
01:23:57.197 05:18:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
01:23:57.197 05:18:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
01:23:57.197 05:18:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
01:23:57.197 05:18:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
01:23:57.197 05:18:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
01:23:57.197 05:18:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:23:57.197 05:18:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
01:23:57.197 "name": "Existed_Raid",
01:23:57.197 "uuid": "00000000-0000-0000-0000-000000000000",
01:23:57.197 "strip_size_kb": 0,
01:23:57.197 "state": "configuring",
01:23:57.197 "raid_level": "raid1",
01:23:57.197 "superblock": false,
01:23:57.197 "num_base_bdevs": 3,
01:23:57.197 "num_base_bdevs_discovered": 2,
01:23:57.197 "num_base_bdevs_operational": 3,
01:23:57.197 "base_bdevs_list": [
01:23:57.197 {
01:23:57.197 "name": null,
01:23:57.197 "uuid": "e3d9894b-044b-4702-a0fd-24e3df249064",
01:23:57.197 "is_configured": false,
01:23:57.197 "data_offset": 0,
01:23:57.197 "data_size": 65536
01:23:57.197 },
01:23:57.197 {
01:23:57.197 "name": "BaseBdev2",
01:23:57.197 "uuid": "186ae824-283c-4ff4-b0eb-6dc6ac66a588",
01:23:57.197 "is_configured": true,
01:23:57.197 "data_offset": 0,
01:23:57.197 "data_size": 65536
01:23:57.197 },
01:23:57.197 {
01:23:57.197 "name": "BaseBdev3",
01:23:57.197 "uuid": "61f57c3e-d5d3-4caf-9a48-e82a77914cb8",
01:23:57.197 "is_configured": true,
01:23:57.197 "data_offset": 0,
01:23:57.197 "data_size": 65536
01:23:57.197 }
01:23:57.197 ]
01:23:57.197 }'
01:23:57.197 05:18:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
01:23:57.197 05:18:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
01:23:57.763 05:18:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all
01:23:57.763 05:18:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
01:23:57.763 05:18:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
01:23:57.763 05:18:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured'
01:23:57.763 05:18:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:23:57.763 05:18:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]]
01:23:57.763 05:18:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid'
01:23:57.763 05:18:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all
01:23:57.763 05:18:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
01:23:57.763 05:18:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
01:23:57.763 05:18:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:23:57.763 05:18:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u e3d9894b-044b-4702-a0fd-24e3df249064
01:23:57.763 05:18:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
01:23:57.763 05:18:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
01:23:57.763 [2024-12-09 05:18:49.326168] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed
01:23:57.763 [2024-12-09 05:18:49.326664] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200
01:23:57.763 [2024-12-09 05:18:49.326690] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512
01:23:57.763 [2024-12-09 05:18:49.327036] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220
01:23:57.763 [2024-12-09 05:18:49.327244] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200
01:23:57.763 [2024-12-09 05:18:49.327266] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200
01:23:57.763 [2024-12-09 05:18:49.327621] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
01:23:57.763 NewBaseBdev
01:23:57.763 05:18:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:23:57.763 05:18:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev
01:23:57.763 05:18:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev
01:23:57.763 05:18:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
01:23:57.763 05:18:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
01:23:57.763 05:18:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
01:23:57.763 05:18:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
01:23:57.763 05:18:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
01:23:57.763 05:18:49 bdev_raid.raid_state_function_test --
common/autotest_common.sh@563 -- # xtrace_disable 01:23:57.763 05:18:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:23:57.763 05:18:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:57.763 05:18:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 01:23:57.763 05:18:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:57.763 05:18:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:23:57.763 [ 01:23:57.763 { 01:23:57.763 "name": "NewBaseBdev", 01:23:57.763 "aliases": [ 01:23:57.763 "e3d9894b-044b-4702-a0fd-24e3df249064" 01:23:57.763 ], 01:23:57.763 "product_name": "Malloc disk", 01:23:57.763 "block_size": 512, 01:23:57.763 "num_blocks": 65536, 01:23:57.763 "uuid": "e3d9894b-044b-4702-a0fd-24e3df249064", 01:23:57.763 "assigned_rate_limits": { 01:23:57.763 "rw_ios_per_sec": 0, 01:23:57.763 "rw_mbytes_per_sec": 0, 01:23:57.763 "r_mbytes_per_sec": 0, 01:23:57.763 "w_mbytes_per_sec": 0 01:23:57.763 }, 01:23:57.763 "claimed": true, 01:23:57.763 "claim_type": "exclusive_write", 01:23:57.763 "zoned": false, 01:23:57.763 "supported_io_types": { 01:23:57.763 "read": true, 01:23:57.763 "write": true, 01:23:57.763 "unmap": true, 01:23:57.763 "flush": true, 01:23:57.763 "reset": true, 01:23:57.763 "nvme_admin": false, 01:23:57.763 "nvme_io": false, 01:23:57.763 "nvme_io_md": false, 01:23:57.763 "write_zeroes": true, 01:23:57.763 "zcopy": true, 01:23:57.763 "get_zone_info": false, 01:23:57.763 "zone_management": false, 01:23:57.763 "zone_append": false, 01:23:57.763 "compare": false, 01:23:57.763 "compare_and_write": false, 01:23:57.763 "abort": true, 01:23:57.763 "seek_hole": false, 01:23:57.763 "seek_data": false, 01:23:57.763 "copy": true, 01:23:57.763 "nvme_iov_md": false 01:23:57.763 }, 01:23:57.763 "memory_domains": [ 01:23:57.763 { 01:23:57.763 
"dma_device_id": "system", 01:23:57.763 "dma_device_type": 1 01:23:57.763 }, 01:23:57.763 { 01:23:57.763 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:23:57.763 "dma_device_type": 2 01:23:57.763 } 01:23:57.763 ], 01:23:57.763 "driver_specific": {} 01:23:57.763 } 01:23:57.763 ] 01:23:57.763 05:18:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:57.764 05:18:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 01:23:57.764 05:18:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 01:23:57.764 05:18:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:23:57.764 05:18:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:23:57.764 05:18:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:23:57.764 05:18:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:23:57.764 05:18:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 01:23:57.764 05:18:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:23:57.764 05:18:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:23:57.764 05:18:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:23:57.764 05:18:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:23:57.764 05:18:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:23:57.764 05:18:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:23:57.764 05:18:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
01:23:57.764 05:18:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
01:23:58.022 05:18:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:23:58.022 05:18:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
01:23:58.022 "name": "Existed_Raid",
01:23:58.022 "uuid": "650b6fb6-841d-4393-97bb-6f3c0b820175",
01:23:58.022 "strip_size_kb": 0,
01:23:58.022 "state": "online",
01:23:58.022 "raid_level": "raid1",
01:23:58.022 "superblock": false,
01:23:58.022 "num_base_bdevs": 3,
01:23:58.022 "num_base_bdevs_discovered": 3,
01:23:58.022 "num_base_bdevs_operational": 3,
01:23:58.022 "base_bdevs_list": [
01:23:58.022 {
01:23:58.022 "name": "NewBaseBdev",
01:23:58.022 "uuid": "e3d9894b-044b-4702-a0fd-24e3df249064",
01:23:58.022 "is_configured": true,
01:23:58.022 "data_offset": 0,
01:23:58.022 "data_size": 65536
01:23:58.022 },
01:23:58.022 {
01:23:58.022 "name": "BaseBdev2",
01:23:58.022 "uuid": "186ae824-283c-4ff4-b0eb-6dc6ac66a588",
01:23:58.022 "is_configured": true,
01:23:58.022 "data_offset": 0,
01:23:58.022 "data_size": 65536
01:23:58.022 },
01:23:58.022 {
01:23:58.022 "name": "BaseBdev3",
01:23:58.022 "uuid": "61f57c3e-d5d3-4caf-9a48-e82a77914cb8",
01:23:58.022 "is_configured": true,
01:23:58.022 "data_offset": 0,
01:23:58.022 "data_size": 65536
01:23:58.022 }
01:23:58.022 ]
01:23:58.022 }'
01:23:58.022 05:18:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
01:23:58.022 05:18:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
01:23:58.280 05:18:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid
01:23:58.280 05:18:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
01:23:58.280 05:18:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
01:23:58.280 05:18:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
01:23:58.280 05:18:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name
01:23:58.280 05:18:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
01:23:58.280 05:18:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
01:23:58.280 05:18:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
01:23:58.280 05:18:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
01:23:58.280 05:18:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
01:23:58.280 [2024-12-09 05:18:49.854803] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
01:23:58.280 05:18:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:23:58.538 05:18:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
01:23:58.538 "name": "Existed_Raid",
01:23:58.538 "aliases": [
01:23:58.538 "650b6fb6-841d-4393-97bb-6f3c0b820175"
01:23:58.538 ],
01:23:58.538 "product_name": "Raid Volume",
01:23:58.538 "block_size": 512,
01:23:58.538 "num_blocks": 65536,
01:23:58.538 "uuid": "650b6fb6-841d-4393-97bb-6f3c0b820175",
01:23:58.538 "assigned_rate_limits": {
01:23:58.538 "rw_ios_per_sec": 0,
01:23:58.538 "rw_mbytes_per_sec": 0,
01:23:58.538 "r_mbytes_per_sec": 0,
01:23:58.538 "w_mbytes_per_sec": 0
01:23:58.538 },
01:23:58.538 "claimed": false,
01:23:58.538 "zoned": false,
01:23:58.538 "supported_io_types": {
01:23:58.538 "read": true,
01:23:58.538 "write": true,
01:23:58.538 "unmap": false,
01:23:58.538 "flush": false,
01:23:58.538 "reset": true,
01:23:58.538 "nvme_admin": false,
01:23:58.538 "nvme_io": false,
01:23:58.538 "nvme_io_md": false,
01:23:58.538 "write_zeroes": true,
01:23:58.538 "zcopy": false,
01:23:58.538 "get_zone_info": false,
01:23:58.538 "zone_management": false,
01:23:58.538 "zone_append": false,
01:23:58.538 "compare": false,
01:23:58.538 "compare_and_write": false,
01:23:58.538 "abort": false,
01:23:58.538 "seek_hole": false,
01:23:58.538 "seek_data": false,
01:23:58.538 "copy": false,
01:23:58.538 "nvme_iov_md": false
01:23:58.538 },
01:23:58.538 "memory_domains": [
01:23:58.538 {
01:23:58.538 "dma_device_id": "system",
01:23:58.538 "dma_device_type": 1
01:23:58.538 },
01:23:58.538 {
01:23:58.538 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
01:23:58.538 "dma_device_type": 2
01:23:58.538 },
01:23:58.538 {
01:23:58.538 "dma_device_id": "system",
01:23:58.538 "dma_device_type": 1
01:23:58.538 },
01:23:58.538 {
01:23:58.538 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
01:23:58.538 "dma_device_type": 2
01:23:58.538 },
01:23:58.538 {
01:23:58.538 "dma_device_id": "system",
01:23:58.538 "dma_device_type": 1
01:23:58.538 },
01:23:58.538 {
01:23:58.538 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
01:23:58.538 "dma_device_type": 2
01:23:58.538 }
01:23:58.538 ],
01:23:58.538 "driver_specific": {
01:23:58.538 "raid": {
01:23:58.538 "uuid": "650b6fb6-841d-4393-97bb-6f3c0b820175",
01:23:58.538 "strip_size_kb": 0,
01:23:58.538 "state": "online",
01:23:58.538 "raid_level": "raid1",
01:23:58.538 "superblock": false,
01:23:58.538 "num_base_bdevs": 3,
01:23:58.538 "num_base_bdevs_discovered": 3,
01:23:58.538 "num_base_bdevs_operational": 3,
01:23:58.538 "base_bdevs_list": [
01:23:58.538 {
01:23:58.538 "name": "NewBaseBdev",
01:23:58.538 "uuid": "e3d9894b-044b-4702-a0fd-24e3df249064",
01:23:58.538 "is_configured": true,
01:23:58.538 "data_offset": 0,
01:23:58.538 "data_size": 65536
01:23:58.538 },
01:23:58.538 {
01:23:58.538 "name": "BaseBdev2",
01:23:58.538 "uuid": "186ae824-283c-4ff4-b0eb-6dc6ac66a588",
01:23:58.538 "is_configured": true,
01:23:58.538 "data_offset": 0,
01:23:58.538 "data_size": 65536
01:23:58.538 },
01:23:58.538 {
01:23:58.538 "name": "BaseBdev3",
01:23:58.538 "uuid": "61f57c3e-d5d3-4caf-9a48-e82a77914cb8",
01:23:58.538 "is_configured": true,
01:23:58.538 "data_offset": 0,
01:23:58.538 "data_size": 65536
01:23:58.538 }
01:23:58.538 ]
01:23:58.538 }
01:23:58.538 }
01:23:58.538 }'
01:23:58.538 05:18:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
01:23:58.538 05:18:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev
01:23:58.538 BaseBdev2
01:23:58.538 BaseBdev3'
01:23:58.538 05:18:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
01:23:58.538 05:18:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
01:23:58.538 05:18:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
01:23:58.538 05:18:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev
01:23:58.538 05:18:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
01:23:58.538 05:18:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
01:23:58.538 05:18:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
01:23:58.538 05:18:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:23:58.538 05:18:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
01:23:58.538 05:18:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
01:23:58.538 05:18:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
01:23:58.538 05:18:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
01:23:58.538 05:18:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
01:23:58.538 05:18:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
01:23:58.538 05:18:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
01:23:58.538 05:18:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:23:58.538 05:18:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
01:23:58.538 05:18:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
01:23:58.538 05:18:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
01:23:58.538 05:18:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
01:23:58.538 05:18:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3
01:23:58.538 05:18:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
01:23:58.538 05:18:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
01:23:58.539 05:18:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:23:58.539 05:18:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
01:23:58.539 05:18:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
01:23:58.539 05:18:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid
01:23:58.539 05:18:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
01:23:58.539 05:18:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
01:23:58.821 [2024-12-09 05:18:50.154488] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
01:23:58.821 [2024-12-09 05:18:50.154534] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
01:23:58.821 [2024-12-09 05:18:50.154642] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
01:23:58.821 [2024-12-09 05:18:50.155020] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
01:23:58.821 [2024-12-09 05:18:50.155039] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline
01:23:58.821 05:18:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:23:58.821 05:18:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 67357
01:23:58.821 05:18:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 67357 ']'
01:23:58.821 05:18:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 67357
01:23:58.821 05:18:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname
01:23:58.821 05:18:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
01:23:58.821 05:18:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67357
killing process with pid 67357
01:23:58.821 05:18:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0
01:23:58.821 05:18:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
01:23:58.821 05:18:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67357'
01:23:58.821 05:18:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 67357
[2024-12-09 05:18:50.192418] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
01:23:58.821 05:18:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 67357
01:23:59.079 [2024-12-09 05:18:50.474561] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
01:24:00.456 05:18:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0
01:24:00.456
01:24:00.456 real 0m12.033s
01:24:00.456 user 0m19.733s
01:24:00.456 sys 0m1.654s
01:24:00.456 05:18:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable
01:24:00.456 05:18:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
01:24:00.456 ************************************
01:24:00.456 END TEST raid_state_function_test
01:24:00.456 ************************************
01:24:00.456 05:18:51 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true
01:24:00.456 05:18:51 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
01:24:00.456 05:18:51 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
01:24:00.456 05:18:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x
01:24:00.456 ************************************
01:24:00.456 START TEST raid_state_function_test_sb
01:24:00.456 ************************************
01:24:00.456 05:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 true
01:24:00.456 05:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1
01:24:00.456 05:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3
01:24:00.456 05:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true
01:24:00.456 05:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev
01:24:00.456 05:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 ))
01:24:00.456 05:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
01:24:00.456 05:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1
01:24:00.456 05:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ ))
01:24:00.456 05:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
01:24:00.456 05:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2
01:24:00.456 05:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ ))
01:24:00.456 05:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
01:24:00.456 05:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3
01:24:00.456 05:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ ))
01:24:00.456 05:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
01:24:00.456 05:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3')
01:24:00.456 05:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs
01:24:00.456 05:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid
01:24:00.456 05:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size
01:24:00.456 05:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg
01:24:00.456 05:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg
01:24:00.456 05:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']'
01:24:00.456 05:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0
01:24:00.456 05:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']'
01:24:00.456 05:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s
01:24:00.456 Process raid pid: 67996
05:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=67996
01:24:00.456 05:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
01:24:00.456 05:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 67996'
01:24:00.456 05:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 67996
01:24:00.456 05:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 67996 ']'
01:24:00.456 05:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
01:24:00.456 05:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100
01:24:00.456 05:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
01:24:00.456 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
01:24:00.456 05:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable
01:24:00.456 05:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
01:24:00.456 [2024-12-09 05:18:51.854543] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization...
01:24:00.456 [2024-12-09 05:18:51.854966] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
01:24:00.456 [2024-12-09 05:18:52.037226] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
01:24:00.715 [2024-12-09 05:18:52.200216] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
01:24:00.973 [2024-12-09 05:18:52.427707] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
01:24:00.973 [2024-12-09 05:18:52.427778] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
01:24:01.538 05:18:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 ))
01:24:01.538 05:18:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0
01:24:01.538 05:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
01:24:01.538 05:18:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
01:24:01.538 05:18:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
01:24:01.538 [2024-12-09 05:18:52.932307] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
01:24:01.538 [2024-12-09 05:18:52.932395] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
01:24:01.538 [2024-12-09 05:18:52.932414] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
01:24:01.538 [2024-12-09 05:18:52.932432] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
01:24:01.538 [2024-12-09 05:18:52.932442] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
01:24:01.538 [2024-12-09 05:18:52.932456] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
01:24:01.538 05:18:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:24:01.538 05:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
01:24:01.538 05:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
01:24:01.538 05:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
01:24:01.538 05:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
01:24:01.538 05:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
01:24:01.538 05:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
01:24:01.538 05:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
01:24:01.538 05:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
01:24:01.538 05:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
01:24:01.538 05:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
01:24:01.538 05:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
01:24:01.538 05:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
01:24:01.538 05:18:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
01:24:01.538 05:18:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
01:24:01.538 05:18:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:24:01.538 05:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
01:24:01.538 "name": "Existed_Raid",
01:24:01.538 "uuid": "c2232682-a13e-4bc4-83d4-b2f6b3c72ac2",
01:24:01.538 "strip_size_kb": 0,
01:24:01.538 "state": "configuring",
01:24:01.538 "raid_level": "raid1",
01:24:01.538 "superblock": true,
01:24:01.538 "num_base_bdevs": 3,
01:24:01.538 "num_base_bdevs_discovered": 0,
01:24:01.538 "num_base_bdevs_operational": 3,
01:24:01.538 "base_bdevs_list": [
01:24:01.538 {
01:24:01.538 "name": "BaseBdev1",
01:24:01.538 "uuid": "00000000-0000-0000-0000-000000000000",
01:24:01.538 "is_configured": false,
01:24:01.538 "data_offset": 0,
01:24:01.538 "data_size": 0
01:24:01.538 },
01:24:01.538 {
01:24:01.538 "name": "BaseBdev2",
01:24:01.538 "uuid": "00000000-0000-0000-0000-000000000000",
01:24:01.538 "is_configured": false,
01:24:01.538 "data_offset": 0,
01:24:01.538 "data_size": 0
01:24:01.538 },
01:24:01.538 {
01:24:01.538 "name": "BaseBdev3",
01:24:01.538 "uuid": "00000000-0000-0000-0000-000000000000",
01:24:01.538 "is_configured": false,
01:24:01.538 "data_offset": 0,
01:24:01.538 "data_size": 0
01:24:01.538 }
01:24:01.538 ]
01:24:01.538 }'
01:24:01.538 05:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
01:24:01.538 05:18:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
01:24:02.104 05:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
01:24:02.104 05:18:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
01:24:02.104 05:18:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
01:24:02.104 [2024-12-09 05:18:53.476404] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
01:24:02.104 [2024-12-09 05:18:53.476451] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring
01:24:02.104 05:18:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:24:02.104 05:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
01:24:02.104 05:18:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
01:24:02.104 05:18:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
01:24:02.104 [2024-12-09 05:18:53.488341] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
01:24:02.104 [2024-12-09 05:18:53.488551] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
01:24:02.104 [2024-12-09 05:18:53.488669] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
01:24:02.104 [2024-12-09 05:18:53.488798] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
01:24:02.104 [2024-12-09 05:18:53.488907] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
01:24:02.104 [2024-12-09 05:18:53.488964] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
01:24:02.104 05:18:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:24:02.104 05:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
01:24:02.104 05:18:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
01:24:02.104 05:18:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
01:24:02.104 [2024-12-09 05:18:53.537514] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
01:24:02.104 BaseBdev1
01:24:02.104 05:18:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:02.104 05:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 01:24:02.104 05:18:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 01:24:02.104 05:18:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 01:24:02.104 05:18:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 01:24:02.104 05:18:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 01:24:02.104 05:18:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 01:24:02.104 05:18:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 01:24:02.104 05:18:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:02.104 05:18:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:02.104 05:18:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:02.104 05:18:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 01:24:02.104 05:18:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:02.104 05:18:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:02.104 [ 01:24:02.104 { 01:24:02.104 "name": "BaseBdev1", 01:24:02.104 "aliases": [ 01:24:02.104 "ed52f869-b4c2-40f2-a330-dcf7f64c8f48" 01:24:02.104 ], 01:24:02.104 "product_name": "Malloc disk", 01:24:02.104 "block_size": 512, 01:24:02.104 "num_blocks": 65536, 01:24:02.104 "uuid": "ed52f869-b4c2-40f2-a330-dcf7f64c8f48", 01:24:02.104 "assigned_rate_limits": { 01:24:02.104 
"rw_ios_per_sec": 0, 01:24:02.104 "rw_mbytes_per_sec": 0, 01:24:02.104 "r_mbytes_per_sec": 0, 01:24:02.104 "w_mbytes_per_sec": 0 01:24:02.104 }, 01:24:02.104 "claimed": true, 01:24:02.104 "claim_type": "exclusive_write", 01:24:02.104 "zoned": false, 01:24:02.104 "supported_io_types": { 01:24:02.104 "read": true, 01:24:02.104 "write": true, 01:24:02.104 "unmap": true, 01:24:02.104 "flush": true, 01:24:02.104 "reset": true, 01:24:02.104 "nvme_admin": false, 01:24:02.104 "nvme_io": false, 01:24:02.104 "nvme_io_md": false, 01:24:02.104 "write_zeroes": true, 01:24:02.104 "zcopy": true, 01:24:02.104 "get_zone_info": false, 01:24:02.104 "zone_management": false, 01:24:02.104 "zone_append": false, 01:24:02.104 "compare": false, 01:24:02.104 "compare_and_write": false, 01:24:02.104 "abort": true, 01:24:02.104 "seek_hole": false, 01:24:02.104 "seek_data": false, 01:24:02.104 "copy": true, 01:24:02.104 "nvme_iov_md": false 01:24:02.104 }, 01:24:02.104 "memory_domains": [ 01:24:02.104 { 01:24:02.104 "dma_device_id": "system", 01:24:02.104 "dma_device_type": 1 01:24:02.104 }, 01:24:02.104 { 01:24:02.104 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:24:02.104 "dma_device_type": 2 01:24:02.104 } 01:24:02.104 ], 01:24:02.104 "driver_specific": {} 01:24:02.104 } 01:24:02.104 ] 01:24:02.104 05:18:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:02.104 05:18:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 01:24:02.104 05:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 01:24:02.104 05:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:24:02.104 05:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:24:02.104 05:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 01:24:02.104 05:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:24:02.104 05:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 01:24:02.104 05:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:24:02.104 05:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:24:02.104 05:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:24:02.104 05:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 01:24:02.104 05:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:24:02.104 05:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:24:02.104 05:18:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:02.104 05:18:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:02.104 05:18:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:02.104 05:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:24:02.104 "name": "Existed_Raid", 01:24:02.104 "uuid": "69a5c353-1870-4245-be9a-9ea35e8d38e7", 01:24:02.104 "strip_size_kb": 0, 01:24:02.104 "state": "configuring", 01:24:02.104 "raid_level": "raid1", 01:24:02.104 "superblock": true, 01:24:02.104 "num_base_bdevs": 3, 01:24:02.104 "num_base_bdevs_discovered": 1, 01:24:02.104 "num_base_bdevs_operational": 3, 01:24:02.104 "base_bdevs_list": [ 01:24:02.104 { 01:24:02.104 "name": "BaseBdev1", 01:24:02.104 "uuid": "ed52f869-b4c2-40f2-a330-dcf7f64c8f48", 01:24:02.104 "is_configured": true, 01:24:02.104 "data_offset": 2048, 01:24:02.104 "data_size": 63488 
01:24:02.104 }, 01:24:02.104 { 01:24:02.104 "name": "BaseBdev2", 01:24:02.104 "uuid": "00000000-0000-0000-0000-000000000000", 01:24:02.105 "is_configured": false, 01:24:02.105 "data_offset": 0, 01:24:02.105 "data_size": 0 01:24:02.105 }, 01:24:02.105 { 01:24:02.105 "name": "BaseBdev3", 01:24:02.105 "uuid": "00000000-0000-0000-0000-000000000000", 01:24:02.105 "is_configured": false, 01:24:02.105 "data_offset": 0, 01:24:02.105 "data_size": 0 01:24:02.105 } 01:24:02.105 ] 01:24:02.105 }' 01:24:02.105 05:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:24:02.105 05:18:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:02.669 05:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 01:24:02.669 05:18:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:02.669 05:18:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:02.669 [2024-12-09 05:18:54.077732] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 01:24:02.669 [2024-12-09 05:18:54.077939] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 01:24:02.669 05:18:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:02.669 05:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 01:24:02.669 05:18:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:02.669 05:18:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:02.669 [2024-12-09 05:18:54.085783] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 01:24:02.669 [2024-12-09 05:18:54.088239] 
bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 01:24:02.669 [2024-12-09 05:18:54.088300] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 01:24:02.669 [2024-12-09 05:18:54.088318] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 01:24:02.669 [2024-12-09 05:18:54.088333] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 01:24:02.669 05:18:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:02.669 05:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 01:24:02.669 05:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 01:24:02.669 05:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 01:24:02.669 05:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:24:02.669 05:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:24:02.669 05:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:24:02.669 05:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:24:02.669 05:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 01:24:02.669 05:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:24:02.669 05:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:24:02.669 05:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:24:02.669 05:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
01:24:02.669 05:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:24:02.669 05:18:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:02.669 05:18:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:02.669 05:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:24:02.669 05:18:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:02.669 05:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:24:02.669 "name": "Existed_Raid", 01:24:02.669 "uuid": "c15186ba-0804-4bbe-9730-04a8ba0f6afd", 01:24:02.669 "strip_size_kb": 0, 01:24:02.669 "state": "configuring", 01:24:02.669 "raid_level": "raid1", 01:24:02.669 "superblock": true, 01:24:02.669 "num_base_bdevs": 3, 01:24:02.669 "num_base_bdevs_discovered": 1, 01:24:02.669 "num_base_bdevs_operational": 3, 01:24:02.669 "base_bdevs_list": [ 01:24:02.669 { 01:24:02.669 "name": "BaseBdev1", 01:24:02.669 "uuid": "ed52f869-b4c2-40f2-a330-dcf7f64c8f48", 01:24:02.669 "is_configured": true, 01:24:02.669 "data_offset": 2048, 01:24:02.669 "data_size": 63488 01:24:02.669 }, 01:24:02.669 { 01:24:02.669 "name": "BaseBdev2", 01:24:02.669 "uuid": "00000000-0000-0000-0000-000000000000", 01:24:02.669 "is_configured": false, 01:24:02.669 "data_offset": 0, 01:24:02.669 "data_size": 0 01:24:02.669 }, 01:24:02.669 { 01:24:02.669 "name": "BaseBdev3", 01:24:02.669 "uuid": "00000000-0000-0000-0000-000000000000", 01:24:02.669 "is_configured": false, 01:24:02.669 "data_offset": 0, 01:24:02.669 "data_size": 0 01:24:02.669 } 01:24:02.669 ] 01:24:02.669 }' 01:24:02.669 05:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:24:02.669 05:18:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 01:24:03.234 05:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 01:24:03.234 05:18:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:03.234 05:18:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:03.234 [2024-12-09 05:18:54.652862] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 01:24:03.234 BaseBdev2 01:24:03.234 05:18:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:03.234 05:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 01:24:03.234 05:18:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 01:24:03.234 05:18:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 01:24:03.234 05:18:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 01:24:03.234 05:18:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 01:24:03.234 05:18:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 01:24:03.234 05:18:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 01:24:03.234 05:18:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:03.234 05:18:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:03.234 05:18:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:03.234 05:18:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 01:24:03.234 05:18:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 01:24:03.234 05:18:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:03.234 [ 01:24:03.234 { 01:24:03.234 "name": "BaseBdev2", 01:24:03.234 "aliases": [ 01:24:03.234 "b7a164d0-1e34-42a0-8ddc-33dbb66beb08" 01:24:03.234 ], 01:24:03.234 "product_name": "Malloc disk", 01:24:03.234 "block_size": 512, 01:24:03.234 "num_blocks": 65536, 01:24:03.234 "uuid": "b7a164d0-1e34-42a0-8ddc-33dbb66beb08", 01:24:03.234 "assigned_rate_limits": { 01:24:03.234 "rw_ios_per_sec": 0, 01:24:03.234 "rw_mbytes_per_sec": 0, 01:24:03.234 "r_mbytes_per_sec": 0, 01:24:03.234 "w_mbytes_per_sec": 0 01:24:03.234 }, 01:24:03.234 "claimed": true, 01:24:03.234 "claim_type": "exclusive_write", 01:24:03.234 "zoned": false, 01:24:03.234 "supported_io_types": { 01:24:03.234 "read": true, 01:24:03.234 "write": true, 01:24:03.234 "unmap": true, 01:24:03.234 "flush": true, 01:24:03.234 "reset": true, 01:24:03.234 "nvme_admin": false, 01:24:03.234 "nvme_io": false, 01:24:03.234 "nvme_io_md": false, 01:24:03.234 "write_zeroes": true, 01:24:03.234 "zcopy": true, 01:24:03.234 "get_zone_info": false, 01:24:03.234 "zone_management": false, 01:24:03.234 "zone_append": false, 01:24:03.234 "compare": false, 01:24:03.234 "compare_and_write": false, 01:24:03.234 "abort": true, 01:24:03.234 "seek_hole": false, 01:24:03.234 "seek_data": false, 01:24:03.234 "copy": true, 01:24:03.234 "nvme_iov_md": false 01:24:03.234 }, 01:24:03.234 "memory_domains": [ 01:24:03.234 { 01:24:03.234 "dma_device_id": "system", 01:24:03.234 "dma_device_type": 1 01:24:03.234 }, 01:24:03.234 { 01:24:03.234 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:24:03.234 "dma_device_type": 2 01:24:03.234 } 01:24:03.234 ], 01:24:03.234 "driver_specific": {} 01:24:03.234 } 01:24:03.234 ] 01:24:03.234 05:18:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:03.234 05:18:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 
01:24:03.234 05:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 01:24:03.234 05:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 01:24:03.234 05:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 01:24:03.234 05:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:24:03.234 05:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:24:03.234 05:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:24:03.234 05:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:24:03.234 05:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 01:24:03.234 05:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:24:03.234 05:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:24:03.234 05:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:24:03.234 05:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 01:24:03.234 05:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:24:03.234 05:18:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:03.234 05:18:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:03.234 05:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:24:03.234 05:18:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:03.234 
05:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:24:03.234 "name": "Existed_Raid", 01:24:03.234 "uuid": "c15186ba-0804-4bbe-9730-04a8ba0f6afd", 01:24:03.234 "strip_size_kb": 0, 01:24:03.234 "state": "configuring", 01:24:03.234 "raid_level": "raid1", 01:24:03.234 "superblock": true, 01:24:03.234 "num_base_bdevs": 3, 01:24:03.234 "num_base_bdevs_discovered": 2, 01:24:03.234 "num_base_bdevs_operational": 3, 01:24:03.234 "base_bdevs_list": [ 01:24:03.234 { 01:24:03.234 "name": "BaseBdev1", 01:24:03.234 "uuid": "ed52f869-b4c2-40f2-a330-dcf7f64c8f48", 01:24:03.234 "is_configured": true, 01:24:03.234 "data_offset": 2048, 01:24:03.234 "data_size": 63488 01:24:03.234 }, 01:24:03.234 { 01:24:03.234 "name": "BaseBdev2", 01:24:03.234 "uuid": "b7a164d0-1e34-42a0-8ddc-33dbb66beb08", 01:24:03.234 "is_configured": true, 01:24:03.234 "data_offset": 2048, 01:24:03.234 "data_size": 63488 01:24:03.234 }, 01:24:03.234 { 01:24:03.234 "name": "BaseBdev3", 01:24:03.234 "uuid": "00000000-0000-0000-0000-000000000000", 01:24:03.234 "is_configured": false, 01:24:03.234 "data_offset": 0, 01:24:03.234 "data_size": 0 01:24:03.234 } 01:24:03.234 ] 01:24:03.234 }' 01:24:03.234 05:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:24:03.234 05:18:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:03.799 05:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 01:24:03.799 05:18:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:03.799 05:18:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:03.799 [2024-12-09 05:18:55.236008] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 01:24:03.799 [2024-12-09 05:18:55.236311] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 
0x617000007e80 01:24:03.799 [2024-12-09 05:18:55.236341] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 01:24:03.799 BaseBdev3 01:24:03.799 [2024-12-09 05:18:55.236704] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 01:24:03.799 [2024-12-09 05:18:55.236927] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 01:24:03.799 [2024-12-09 05:18:55.236951] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 01:24:03.799 [2024-12-09 05:18:55.237138] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:24:03.799 05:18:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:03.799 05:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 01:24:03.799 05:18:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 01:24:03.799 05:18:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 01:24:03.799 05:18:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 01:24:03.799 05:18:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 01:24:03.799 05:18:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 01:24:03.799 05:18:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 01:24:03.799 05:18:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:03.799 05:18:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:03.799 05:18:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:03.799 05:18:55 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 01:24:03.799 05:18:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:03.799 05:18:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:03.799 [ 01:24:03.799 { 01:24:03.799 "name": "BaseBdev3", 01:24:03.799 "aliases": [ 01:24:03.799 "e404a0cd-c332-480e-870a-988ad3475ecc" 01:24:03.799 ], 01:24:03.799 "product_name": "Malloc disk", 01:24:03.799 "block_size": 512, 01:24:03.799 "num_blocks": 65536, 01:24:03.799 "uuid": "e404a0cd-c332-480e-870a-988ad3475ecc", 01:24:03.799 "assigned_rate_limits": { 01:24:03.799 "rw_ios_per_sec": 0, 01:24:03.799 "rw_mbytes_per_sec": 0, 01:24:03.799 "r_mbytes_per_sec": 0, 01:24:03.799 "w_mbytes_per_sec": 0 01:24:03.799 }, 01:24:03.799 "claimed": true, 01:24:03.799 "claim_type": "exclusive_write", 01:24:03.799 "zoned": false, 01:24:03.799 "supported_io_types": { 01:24:03.799 "read": true, 01:24:03.799 "write": true, 01:24:03.799 "unmap": true, 01:24:03.799 "flush": true, 01:24:03.799 "reset": true, 01:24:03.799 "nvme_admin": false, 01:24:03.799 "nvme_io": false, 01:24:03.799 "nvme_io_md": false, 01:24:03.799 "write_zeroes": true, 01:24:03.799 "zcopy": true, 01:24:03.799 "get_zone_info": false, 01:24:03.799 "zone_management": false, 01:24:03.799 "zone_append": false, 01:24:03.799 "compare": false, 01:24:03.799 "compare_and_write": false, 01:24:03.799 "abort": true, 01:24:03.799 "seek_hole": false, 01:24:03.799 "seek_data": false, 01:24:03.799 "copy": true, 01:24:03.799 "nvme_iov_md": false 01:24:03.799 }, 01:24:03.799 "memory_domains": [ 01:24:03.799 { 01:24:03.799 "dma_device_id": "system", 01:24:03.799 "dma_device_type": 1 01:24:03.799 }, 01:24:03.799 { 01:24:03.799 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:24:03.799 "dma_device_type": 2 01:24:03.799 } 01:24:03.799 ], 01:24:03.799 "driver_specific": {} 01:24:03.799 } 01:24:03.799 ] 
01:24:03.799 05:18:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:03.799 05:18:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 01:24:03.799 05:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 01:24:03.799 05:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 01:24:03.799 05:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 01:24:03.799 05:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:24:03.799 05:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:24:03.799 05:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:24:03.799 05:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:24:03.799 05:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 01:24:03.799 05:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:24:03.799 05:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:24:03.799 05:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:24:03.799 05:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 01:24:03.799 05:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:24:03.799 05:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:24:03.799 05:18:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:03.799 
05:18:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:03.799 05:18:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:03.799 05:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:24:03.799 "name": "Existed_Raid", 01:24:03.799 "uuid": "c15186ba-0804-4bbe-9730-04a8ba0f6afd", 01:24:03.799 "strip_size_kb": 0, 01:24:03.799 "state": "online", 01:24:03.800 "raid_level": "raid1", 01:24:03.800 "superblock": true, 01:24:03.800 "num_base_bdevs": 3, 01:24:03.800 "num_base_bdevs_discovered": 3, 01:24:03.800 "num_base_bdevs_operational": 3, 01:24:03.800 "base_bdevs_list": [ 01:24:03.800 { 01:24:03.800 "name": "BaseBdev1", 01:24:03.800 "uuid": "ed52f869-b4c2-40f2-a330-dcf7f64c8f48", 01:24:03.800 "is_configured": true, 01:24:03.800 "data_offset": 2048, 01:24:03.800 "data_size": 63488 01:24:03.800 }, 01:24:03.800 { 01:24:03.800 "name": "BaseBdev2", 01:24:03.800 "uuid": "b7a164d0-1e34-42a0-8ddc-33dbb66beb08", 01:24:03.800 "is_configured": true, 01:24:03.800 "data_offset": 2048, 01:24:03.800 "data_size": 63488 01:24:03.800 }, 01:24:03.800 { 01:24:03.800 "name": "BaseBdev3", 01:24:03.800 "uuid": "e404a0cd-c332-480e-870a-988ad3475ecc", 01:24:03.800 "is_configured": true, 01:24:03.800 "data_offset": 2048, 01:24:03.800 "data_size": 63488 01:24:03.800 } 01:24:03.800 ] 01:24:03.800 }' 01:24:03.800 05:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:24:03.800 05:18:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:04.413 05:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 01:24:04.413 05:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 01:24:04.413 05:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
01:24:04.413 05:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 01:24:04.413 05:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 01:24:04.413 05:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 01:24:04.413 05:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 01:24:04.413 05:18:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:04.413 05:18:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:04.413 05:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 01:24:04.413 [2024-12-09 05:18:55.832646] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 01:24:04.413 05:18:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:04.413 05:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 01:24:04.413 "name": "Existed_Raid", 01:24:04.413 "aliases": [ 01:24:04.413 "c15186ba-0804-4bbe-9730-04a8ba0f6afd" 01:24:04.413 ], 01:24:04.413 "product_name": "Raid Volume", 01:24:04.413 "block_size": 512, 01:24:04.413 "num_blocks": 63488, 01:24:04.413 "uuid": "c15186ba-0804-4bbe-9730-04a8ba0f6afd", 01:24:04.413 "assigned_rate_limits": { 01:24:04.413 "rw_ios_per_sec": 0, 01:24:04.413 "rw_mbytes_per_sec": 0, 01:24:04.413 "r_mbytes_per_sec": 0, 01:24:04.413 "w_mbytes_per_sec": 0 01:24:04.413 }, 01:24:04.413 "claimed": false, 01:24:04.413 "zoned": false, 01:24:04.413 "supported_io_types": { 01:24:04.413 "read": true, 01:24:04.413 "write": true, 01:24:04.413 "unmap": false, 01:24:04.413 "flush": false, 01:24:04.413 "reset": true, 01:24:04.413 "nvme_admin": false, 01:24:04.413 "nvme_io": false, 01:24:04.413 "nvme_io_md": false, 01:24:04.413 "write_zeroes": true, 
01:24:04.413 "zcopy": false, 01:24:04.413 "get_zone_info": false, 01:24:04.413 "zone_management": false, 01:24:04.413 "zone_append": false, 01:24:04.413 "compare": false, 01:24:04.413 "compare_and_write": false, 01:24:04.413 "abort": false, 01:24:04.413 "seek_hole": false, 01:24:04.413 "seek_data": false, 01:24:04.413 "copy": false, 01:24:04.413 "nvme_iov_md": false 01:24:04.413 }, 01:24:04.413 "memory_domains": [ 01:24:04.413 { 01:24:04.413 "dma_device_id": "system", 01:24:04.413 "dma_device_type": 1 01:24:04.413 }, 01:24:04.413 { 01:24:04.413 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:24:04.413 "dma_device_type": 2 01:24:04.413 }, 01:24:04.413 { 01:24:04.413 "dma_device_id": "system", 01:24:04.413 "dma_device_type": 1 01:24:04.413 }, 01:24:04.413 { 01:24:04.413 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:24:04.413 "dma_device_type": 2 01:24:04.413 }, 01:24:04.413 { 01:24:04.413 "dma_device_id": "system", 01:24:04.413 "dma_device_type": 1 01:24:04.413 }, 01:24:04.413 { 01:24:04.413 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:24:04.413 "dma_device_type": 2 01:24:04.413 } 01:24:04.413 ], 01:24:04.413 "driver_specific": { 01:24:04.413 "raid": { 01:24:04.413 "uuid": "c15186ba-0804-4bbe-9730-04a8ba0f6afd", 01:24:04.413 "strip_size_kb": 0, 01:24:04.413 "state": "online", 01:24:04.414 "raid_level": "raid1", 01:24:04.414 "superblock": true, 01:24:04.414 "num_base_bdevs": 3, 01:24:04.414 "num_base_bdevs_discovered": 3, 01:24:04.414 "num_base_bdevs_operational": 3, 01:24:04.414 "base_bdevs_list": [ 01:24:04.414 { 01:24:04.414 "name": "BaseBdev1", 01:24:04.414 "uuid": "ed52f869-b4c2-40f2-a330-dcf7f64c8f48", 01:24:04.414 "is_configured": true, 01:24:04.414 "data_offset": 2048, 01:24:04.414 "data_size": 63488 01:24:04.414 }, 01:24:04.414 { 01:24:04.414 "name": "BaseBdev2", 01:24:04.414 "uuid": "b7a164d0-1e34-42a0-8ddc-33dbb66beb08", 01:24:04.414 "is_configured": true, 01:24:04.414 "data_offset": 2048, 01:24:04.414 "data_size": 63488 01:24:04.414 }, 01:24:04.414 { 
01:24:04.414 "name": "BaseBdev3", 01:24:04.414 "uuid": "e404a0cd-c332-480e-870a-988ad3475ecc", 01:24:04.414 "is_configured": true, 01:24:04.414 "data_offset": 2048, 01:24:04.414 "data_size": 63488 01:24:04.414 } 01:24:04.414 ] 01:24:04.414 } 01:24:04.414 } 01:24:04.414 }' 01:24:04.414 05:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 01:24:04.414 05:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 01:24:04.414 BaseBdev2 01:24:04.414 BaseBdev3' 01:24:04.414 05:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:24:04.414 05:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 01:24:04.414 05:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:24:04.414 05:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 01:24:04.414 05:18:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:04.414 05:18:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:04.414 05:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:24:04.414 05:18:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:04.414 05:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:24:04.414 05:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:24:04.414 05:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:24:04.414 05:18:56 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 01:24:04.414 05:18:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:04.414 05:18:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:04.414 05:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:24:04.672 05:18:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:04.672 05:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:24:04.672 05:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:24:04.672 05:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:24:04.672 05:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 01:24:04.672 05:18:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:04.672 05:18:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:04.672 05:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:24:04.672 05:18:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:04.672 05:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:24:04.672 05:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:24:04.672 05:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 01:24:04.672 05:18:56 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 01:24:04.672 05:18:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:04.672 [2024-12-09 05:18:56.112404] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 01:24:04.672 05:18:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:04.672 05:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 01:24:04.672 05:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 01:24:04.672 05:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 01:24:04.672 05:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 01:24:04.672 05:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 01:24:04.672 05:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 01:24:04.672 05:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:24:04.672 05:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:24:04.672 05:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:24:04.672 05:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:24:04.672 05:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 01:24:04.672 05:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:24:04.672 05:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:24:04.672 05:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:24:04.672 
05:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 01:24:04.672 05:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:24:04.672 05:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:24:04.672 05:18:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:04.672 05:18:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:04.672 05:18:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:04.672 05:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:24:04.672 "name": "Existed_Raid", 01:24:04.672 "uuid": "c15186ba-0804-4bbe-9730-04a8ba0f6afd", 01:24:04.672 "strip_size_kb": 0, 01:24:04.672 "state": "online", 01:24:04.672 "raid_level": "raid1", 01:24:04.672 "superblock": true, 01:24:04.672 "num_base_bdevs": 3, 01:24:04.672 "num_base_bdevs_discovered": 2, 01:24:04.672 "num_base_bdevs_operational": 2, 01:24:04.672 "base_bdevs_list": [ 01:24:04.672 { 01:24:04.672 "name": null, 01:24:04.672 "uuid": "00000000-0000-0000-0000-000000000000", 01:24:04.672 "is_configured": false, 01:24:04.672 "data_offset": 0, 01:24:04.672 "data_size": 63488 01:24:04.672 }, 01:24:04.672 { 01:24:04.672 "name": "BaseBdev2", 01:24:04.672 "uuid": "b7a164d0-1e34-42a0-8ddc-33dbb66beb08", 01:24:04.672 "is_configured": true, 01:24:04.672 "data_offset": 2048, 01:24:04.672 "data_size": 63488 01:24:04.672 }, 01:24:04.672 { 01:24:04.672 "name": "BaseBdev3", 01:24:04.672 "uuid": "e404a0cd-c332-480e-870a-988ad3475ecc", 01:24:04.672 "is_configured": true, 01:24:04.672 "data_offset": 2048, 01:24:04.672 "data_size": 63488 01:24:04.672 } 01:24:04.672 ] 01:24:04.672 }' 01:24:04.672 05:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:24:04.672 
05:18:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:05.235 05:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 01:24:05.235 05:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 01:24:05.235 05:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 01:24:05.235 05:18:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:05.235 05:18:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:05.235 05:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 01:24:05.235 05:18:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:05.235 05:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 01:24:05.235 05:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 01:24:05.235 05:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 01:24:05.235 05:18:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:05.235 05:18:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:05.235 [2024-12-09 05:18:56.792282] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 01:24:05.492 05:18:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:05.492 05:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 01:24:05.492 05:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 01:24:05.492 05:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd 
bdev_raid_get_bdevs all 01:24:05.492 05:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 01:24:05.492 05:18:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:05.492 05:18:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:05.492 05:18:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:05.492 05:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 01:24:05.492 05:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 01:24:05.492 05:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 01:24:05.492 05:18:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:05.492 05:18:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:05.492 [2024-12-09 05:18:56.950422] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 01:24:05.492 [2024-12-09 05:18:56.950632] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 01:24:05.492 [2024-12-09 05:18:57.052117] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 01:24:05.492 [2024-12-09 05:18:57.052229] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 01:24:05.492 [2024-12-09 05:18:57.052250] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 01:24:05.492 05:18:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:05.492 05:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 01:24:05.492 05:18:57 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 01:24:05.492 05:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 01:24:05.492 05:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 01:24:05.492 05:18:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:05.492 05:18:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:05.492 05:18:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:05.492 05:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 01:24:05.492 05:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 01:24:05.492 05:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 01:24:05.492 05:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 01:24:05.492 05:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 01:24:05.492 05:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 01:24:05.492 05:18:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:05.492 05:18:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:05.751 BaseBdev2 01:24:05.751 05:18:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:05.751 05:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 01:24:05.751 05:18:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 01:24:05.751 05:18:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 
01:24:05.751 05:18:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 01:24:05.751 05:18:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 01:24:05.751 05:18:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 01:24:05.751 05:18:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 01:24:05.751 05:18:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:05.751 05:18:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:05.751 05:18:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:05.751 05:18:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 01:24:05.751 05:18:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:05.751 05:18:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:05.751 [ 01:24:05.751 { 01:24:05.751 "name": "BaseBdev2", 01:24:05.751 "aliases": [ 01:24:05.751 "38d8654e-2ddd-4d50-afbe-38a100fdee87" 01:24:05.751 ], 01:24:05.751 "product_name": "Malloc disk", 01:24:05.751 "block_size": 512, 01:24:05.751 "num_blocks": 65536, 01:24:05.751 "uuid": "38d8654e-2ddd-4d50-afbe-38a100fdee87", 01:24:05.751 "assigned_rate_limits": { 01:24:05.751 "rw_ios_per_sec": 0, 01:24:05.751 "rw_mbytes_per_sec": 0, 01:24:05.751 "r_mbytes_per_sec": 0, 01:24:05.751 "w_mbytes_per_sec": 0 01:24:05.751 }, 01:24:05.751 "claimed": false, 01:24:05.751 "zoned": false, 01:24:05.751 "supported_io_types": { 01:24:05.751 "read": true, 01:24:05.751 "write": true, 01:24:05.751 "unmap": true, 01:24:05.751 "flush": true, 01:24:05.751 "reset": true, 01:24:05.751 "nvme_admin": false, 01:24:05.751 "nvme_io": false, 01:24:05.751 
"nvme_io_md": false, 01:24:05.751 "write_zeroes": true, 01:24:05.751 "zcopy": true, 01:24:05.751 "get_zone_info": false, 01:24:05.751 "zone_management": false, 01:24:05.751 "zone_append": false, 01:24:05.751 "compare": false, 01:24:05.751 "compare_and_write": false, 01:24:05.751 "abort": true, 01:24:05.751 "seek_hole": false, 01:24:05.751 "seek_data": false, 01:24:05.751 "copy": true, 01:24:05.751 "nvme_iov_md": false 01:24:05.751 }, 01:24:05.751 "memory_domains": [ 01:24:05.751 { 01:24:05.751 "dma_device_id": "system", 01:24:05.751 "dma_device_type": 1 01:24:05.751 }, 01:24:05.751 { 01:24:05.751 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:24:05.751 "dma_device_type": 2 01:24:05.751 } 01:24:05.751 ], 01:24:05.751 "driver_specific": {} 01:24:05.751 } 01:24:05.751 ] 01:24:05.751 05:18:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:05.751 05:18:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 01:24:05.751 05:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 01:24:05.751 05:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 01:24:05.751 05:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 01:24:05.751 05:18:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:05.751 05:18:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:05.751 BaseBdev3 01:24:05.751 05:18:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:05.751 05:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 01:24:05.751 05:18:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 01:24:05.751 05:18:57 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 01:24:05.751 05:18:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 01:24:05.751 05:18:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 01:24:05.751 05:18:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 01:24:05.751 05:18:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 01:24:05.751 05:18:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:05.751 05:18:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:05.751 05:18:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:05.751 05:18:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 01:24:05.751 05:18:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:05.751 05:18:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:05.751 [ 01:24:05.751 { 01:24:05.751 "name": "BaseBdev3", 01:24:05.751 "aliases": [ 01:24:05.751 "49462ba9-3e00-46c0-9f38-d7a238bd2936" 01:24:05.751 ], 01:24:05.751 "product_name": "Malloc disk", 01:24:05.751 "block_size": 512, 01:24:05.751 "num_blocks": 65536, 01:24:05.751 "uuid": "49462ba9-3e00-46c0-9f38-d7a238bd2936", 01:24:05.751 "assigned_rate_limits": { 01:24:05.751 "rw_ios_per_sec": 0, 01:24:05.751 "rw_mbytes_per_sec": 0, 01:24:05.751 "r_mbytes_per_sec": 0, 01:24:05.751 "w_mbytes_per_sec": 0 01:24:05.751 }, 01:24:05.751 "claimed": false, 01:24:05.751 "zoned": false, 01:24:05.751 "supported_io_types": { 01:24:05.751 "read": true, 01:24:05.751 "write": true, 01:24:05.751 "unmap": true, 01:24:05.751 "flush": true, 01:24:05.751 "reset": true, 01:24:05.751 "nvme_admin": false, 
01:24:05.751 "nvme_io": false, 01:24:05.751 "nvme_io_md": false, 01:24:05.751 "write_zeroes": true, 01:24:05.751 "zcopy": true, 01:24:05.751 "get_zone_info": false, 01:24:05.751 "zone_management": false, 01:24:05.751 "zone_append": false, 01:24:05.751 "compare": false, 01:24:05.751 "compare_and_write": false, 01:24:05.751 "abort": true, 01:24:05.751 "seek_hole": false, 01:24:05.751 "seek_data": false, 01:24:05.751 "copy": true, 01:24:05.751 "nvme_iov_md": false 01:24:05.751 }, 01:24:05.751 "memory_domains": [ 01:24:05.751 { 01:24:05.751 "dma_device_id": "system", 01:24:05.751 "dma_device_type": 1 01:24:05.751 }, 01:24:05.751 { 01:24:05.751 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:24:05.751 "dma_device_type": 2 01:24:05.751 } 01:24:05.751 ], 01:24:05.751 "driver_specific": {} 01:24:05.751 } 01:24:05.751 ] 01:24:05.751 05:18:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:05.751 05:18:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 01:24:05.751 05:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 01:24:05.751 05:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 01:24:05.751 05:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 01:24:05.751 05:18:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:05.751 05:18:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:05.751 [2024-12-09 05:18:57.280448] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 01:24:05.751 [2024-12-09 05:18:57.280785] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 01:24:05.751 [2024-12-09 05:18:57.280827] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 01:24:05.751 [2024-12-09 05:18:57.283480] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 01:24:05.751 05:18:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:05.751 05:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 01:24:05.751 05:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:24:05.751 05:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:24:05.751 05:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:24:05.751 05:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:24:05.751 05:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 01:24:05.751 05:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:24:05.751 05:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:24:05.751 05:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:24:05.751 05:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 01:24:05.751 05:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:24:05.751 05:18:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:05.751 05:18:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:05.751 05:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:24:05.751 
05:18:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:05.751 05:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:24:05.751 "name": "Existed_Raid", 01:24:05.751 "uuid": "8853a8a5-9139-484f-bcb1-d0e768d1b283", 01:24:05.751 "strip_size_kb": 0, 01:24:05.751 "state": "configuring", 01:24:05.751 "raid_level": "raid1", 01:24:05.751 "superblock": true, 01:24:05.751 "num_base_bdevs": 3, 01:24:05.751 "num_base_bdevs_discovered": 2, 01:24:05.751 "num_base_bdevs_operational": 3, 01:24:05.751 "base_bdevs_list": [ 01:24:05.751 { 01:24:05.751 "name": "BaseBdev1", 01:24:05.751 "uuid": "00000000-0000-0000-0000-000000000000", 01:24:05.751 "is_configured": false, 01:24:05.751 "data_offset": 0, 01:24:05.751 "data_size": 0 01:24:05.751 }, 01:24:05.751 { 01:24:05.751 "name": "BaseBdev2", 01:24:05.751 "uuid": "38d8654e-2ddd-4d50-afbe-38a100fdee87", 01:24:05.751 "is_configured": true, 01:24:05.751 "data_offset": 2048, 01:24:05.751 "data_size": 63488 01:24:05.751 }, 01:24:05.751 { 01:24:05.751 "name": "BaseBdev3", 01:24:05.751 "uuid": "49462ba9-3e00-46c0-9f38-d7a238bd2936", 01:24:05.751 "is_configured": true, 01:24:05.751 "data_offset": 2048, 01:24:05.751 "data_size": 63488 01:24:05.751 } 01:24:05.751 ] 01:24:05.751 }' 01:24:05.751 05:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:24:05.751 05:18:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:06.317 05:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 01:24:06.317 05:18:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:06.317 05:18:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:06.317 [2024-12-09 05:18:57.764669] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 01:24:06.317 05:18:57 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:06.317 05:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 01:24:06.317 05:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:24:06.317 05:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:24:06.317 05:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:24:06.317 05:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:24:06.317 05:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 01:24:06.317 05:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:24:06.317 05:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:24:06.317 05:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:24:06.317 05:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 01:24:06.317 05:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:24:06.317 05:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:24:06.317 05:18:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:06.317 05:18:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:06.317 05:18:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:06.317 05:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:24:06.317 "name": 
"Existed_Raid", 01:24:06.317 "uuid": "8853a8a5-9139-484f-bcb1-d0e768d1b283", 01:24:06.317 "strip_size_kb": 0, 01:24:06.317 "state": "configuring", 01:24:06.317 "raid_level": "raid1", 01:24:06.317 "superblock": true, 01:24:06.317 "num_base_bdevs": 3, 01:24:06.317 "num_base_bdevs_discovered": 1, 01:24:06.317 "num_base_bdevs_operational": 3, 01:24:06.317 "base_bdevs_list": [ 01:24:06.317 { 01:24:06.317 "name": "BaseBdev1", 01:24:06.317 "uuid": "00000000-0000-0000-0000-000000000000", 01:24:06.317 "is_configured": false, 01:24:06.317 "data_offset": 0, 01:24:06.317 "data_size": 0 01:24:06.317 }, 01:24:06.317 { 01:24:06.317 "name": null, 01:24:06.317 "uuid": "38d8654e-2ddd-4d50-afbe-38a100fdee87", 01:24:06.317 "is_configured": false, 01:24:06.317 "data_offset": 0, 01:24:06.317 "data_size": 63488 01:24:06.317 }, 01:24:06.317 { 01:24:06.317 "name": "BaseBdev3", 01:24:06.317 "uuid": "49462ba9-3e00-46c0-9f38-d7a238bd2936", 01:24:06.317 "is_configured": true, 01:24:06.317 "data_offset": 2048, 01:24:06.317 "data_size": 63488 01:24:06.317 } 01:24:06.317 ] 01:24:06.317 }' 01:24:06.317 05:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:24:06.317 05:18:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:06.884 05:18:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 01:24:06.884 05:18:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 01:24:06.884 05:18:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:06.884 05:18:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:06.884 05:18:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:06.884 05:18:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 01:24:06.884 
05:18:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 01:24:06.884 05:18:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:06.884 05:18:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:06.884 [2024-12-09 05:18:58.397363] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 01:24:06.884 BaseBdev1 01:24:06.884 05:18:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:06.884 05:18:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 01:24:06.884 05:18:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 01:24:06.884 05:18:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 01:24:06.884 05:18:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 01:24:06.884 05:18:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 01:24:06.884 05:18:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 01:24:06.884 05:18:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 01:24:06.884 05:18:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:06.884 05:18:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:06.884 05:18:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:06.884 05:18:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 01:24:06.884 05:18:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
01:24:06.884 05:18:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:06.884 [ 01:24:06.884 { 01:24:06.884 "name": "BaseBdev1", 01:24:06.885 "aliases": [ 01:24:06.885 "77e74891-3bfa-4ed9-9bcb-ad173f1453f8" 01:24:06.885 ], 01:24:06.885 "product_name": "Malloc disk", 01:24:06.885 "block_size": 512, 01:24:06.885 "num_blocks": 65536, 01:24:06.885 "uuid": "77e74891-3bfa-4ed9-9bcb-ad173f1453f8", 01:24:06.885 "assigned_rate_limits": { 01:24:06.885 "rw_ios_per_sec": 0, 01:24:06.885 "rw_mbytes_per_sec": 0, 01:24:06.885 "r_mbytes_per_sec": 0, 01:24:06.885 "w_mbytes_per_sec": 0 01:24:06.885 }, 01:24:06.885 "claimed": true, 01:24:06.885 "claim_type": "exclusive_write", 01:24:06.885 "zoned": false, 01:24:06.885 "supported_io_types": { 01:24:06.885 "read": true, 01:24:06.885 "write": true, 01:24:06.885 "unmap": true, 01:24:06.885 "flush": true, 01:24:06.885 "reset": true, 01:24:06.885 "nvme_admin": false, 01:24:06.885 "nvme_io": false, 01:24:06.885 "nvme_io_md": false, 01:24:06.885 "write_zeroes": true, 01:24:06.885 "zcopy": true, 01:24:06.885 "get_zone_info": false, 01:24:06.885 "zone_management": false, 01:24:06.885 "zone_append": false, 01:24:06.885 "compare": false, 01:24:06.885 "compare_and_write": false, 01:24:06.885 "abort": true, 01:24:06.885 "seek_hole": false, 01:24:06.885 "seek_data": false, 01:24:06.885 "copy": true, 01:24:06.885 "nvme_iov_md": false 01:24:06.885 }, 01:24:06.885 "memory_domains": [ 01:24:06.885 { 01:24:06.885 "dma_device_id": "system", 01:24:06.885 "dma_device_type": 1 01:24:06.885 }, 01:24:06.885 { 01:24:06.885 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:24:06.885 "dma_device_type": 2 01:24:06.885 } 01:24:06.885 ], 01:24:06.885 "driver_specific": {} 01:24:06.885 } 01:24:06.885 ] 01:24:06.885 05:18:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:06.885 05:18:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 01:24:06.885 
05:18:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 01:24:06.885 05:18:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:24:06.885 05:18:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:24:06.885 05:18:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:24:06.885 05:18:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:24:06.885 05:18:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 01:24:06.885 05:18:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:24:06.885 05:18:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:24:06.885 05:18:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:24:06.885 05:18:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 01:24:06.885 05:18:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:24:06.885 05:18:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:06.885 05:18:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:06.885 05:18:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:24:06.885 05:18:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:06.885 05:18:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:24:06.885 "name": "Existed_Raid", 01:24:06.885 "uuid": "8853a8a5-9139-484f-bcb1-d0e768d1b283", 01:24:06.885 "strip_size_kb": 0, 
01:24:06.885 "state": "configuring", 01:24:06.885 "raid_level": "raid1", 01:24:06.885 "superblock": true, 01:24:06.885 "num_base_bdevs": 3, 01:24:06.885 "num_base_bdevs_discovered": 2, 01:24:06.885 "num_base_bdevs_operational": 3, 01:24:06.885 "base_bdevs_list": [ 01:24:06.885 { 01:24:06.885 "name": "BaseBdev1", 01:24:06.885 "uuid": "77e74891-3bfa-4ed9-9bcb-ad173f1453f8", 01:24:06.885 "is_configured": true, 01:24:06.885 "data_offset": 2048, 01:24:06.885 "data_size": 63488 01:24:06.885 }, 01:24:06.885 { 01:24:06.885 "name": null, 01:24:06.885 "uuid": "38d8654e-2ddd-4d50-afbe-38a100fdee87", 01:24:06.885 "is_configured": false, 01:24:06.885 "data_offset": 0, 01:24:06.885 "data_size": 63488 01:24:06.885 }, 01:24:06.885 { 01:24:06.885 "name": "BaseBdev3", 01:24:06.885 "uuid": "49462ba9-3e00-46c0-9f38-d7a238bd2936", 01:24:06.885 "is_configured": true, 01:24:06.885 "data_offset": 2048, 01:24:06.885 "data_size": 63488 01:24:06.885 } 01:24:06.885 ] 01:24:06.885 }' 01:24:06.885 05:18:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:24:06.885 05:18:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:07.452 05:18:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 01:24:07.453 05:18:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:07.453 05:18:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:07.453 05:18:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 01:24:07.453 05:18:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:07.453 05:18:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 01:24:07.453 05:18:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev 
BaseBdev3 01:24:07.453 05:18:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:07.453 05:18:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:07.453 [2024-12-09 05:18:59.021682] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 01:24:07.453 05:18:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:07.453 05:18:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 01:24:07.453 05:18:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:24:07.453 05:18:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:24:07.453 05:18:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:24:07.453 05:18:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:24:07.453 05:18:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 01:24:07.453 05:18:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:24:07.453 05:18:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:24:07.453 05:18:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:24:07.453 05:18:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 01:24:07.453 05:18:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:24:07.453 05:18:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:24:07.453 05:18:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 01:24:07.453 05:18:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:07.453 05:18:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:07.711 05:18:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:24:07.711 "name": "Existed_Raid", 01:24:07.711 "uuid": "8853a8a5-9139-484f-bcb1-d0e768d1b283", 01:24:07.711 "strip_size_kb": 0, 01:24:07.711 "state": "configuring", 01:24:07.711 "raid_level": "raid1", 01:24:07.711 "superblock": true, 01:24:07.711 "num_base_bdevs": 3, 01:24:07.711 "num_base_bdevs_discovered": 1, 01:24:07.711 "num_base_bdevs_operational": 3, 01:24:07.711 "base_bdevs_list": [ 01:24:07.711 { 01:24:07.711 "name": "BaseBdev1", 01:24:07.711 "uuid": "77e74891-3bfa-4ed9-9bcb-ad173f1453f8", 01:24:07.711 "is_configured": true, 01:24:07.711 "data_offset": 2048, 01:24:07.711 "data_size": 63488 01:24:07.711 }, 01:24:07.711 { 01:24:07.711 "name": null, 01:24:07.711 "uuid": "38d8654e-2ddd-4d50-afbe-38a100fdee87", 01:24:07.711 "is_configured": false, 01:24:07.711 "data_offset": 0, 01:24:07.711 "data_size": 63488 01:24:07.711 }, 01:24:07.711 { 01:24:07.711 "name": null, 01:24:07.711 "uuid": "49462ba9-3e00-46c0-9f38-d7a238bd2936", 01:24:07.711 "is_configured": false, 01:24:07.711 "data_offset": 0, 01:24:07.711 "data_size": 63488 01:24:07.711 } 01:24:07.711 ] 01:24:07.711 }' 01:24:07.711 05:18:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:24:07.711 05:18:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:07.969 05:18:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 01:24:07.969 05:18:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 01:24:07.969 05:18:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 01:24:07.969 05:18:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:07.969 05:18:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:08.227 05:18:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 01:24:08.227 05:18:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 01:24:08.227 05:18:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:08.227 05:18:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:08.227 [2024-12-09 05:18:59.593917] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 01:24:08.227 05:18:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:08.227 05:18:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 01:24:08.227 05:18:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:24:08.227 05:18:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:24:08.227 05:18:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:24:08.227 05:18:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:24:08.227 05:18:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 01:24:08.227 05:18:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:24:08.227 05:18:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:24:08.227 05:18:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 01:24:08.227 05:18:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 01:24:08.227 05:18:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:24:08.227 05:18:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:08.227 05:18:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:08.227 05:18:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:24:08.227 05:18:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:08.227 05:18:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:24:08.227 "name": "Existed_Raid", 01:24:08.227 "uuid": "8853a8a5-9139-484f-bcb1-d0e768d1b283", 01:24:08.227 "strip_size_kb": 0, 01:24:08.227 "state": "configuring", 01:24:08.227 "raid_level": "raid1", 01:24:08.227 "superblock": true, 01:24:08.227 "num_base_bdevs": 3, 01:24:08.227 "num_base_bdevs_discovered": 2, 01:24:08.227 "num_base_bdevs_operational": 3, 01:24:08.227 "base_bdevs_list": [ 01:24:08.227 { 01:24:08.227 "name": "BaseBdev1", 01:24:08.227 "uuid": "77e74891-3bfa-4ed9-9bcb-ad173f1453f8", 01:24:08.227 "is_configured": true, 01:24:08.227 "data_offset": 2048, 01:24:08.227 "data_size": 63488 01:24:08.227 }, 01:24:08.227 { 01:24:08.227 "name": null, 01:24:08.227 "uuid": "38d8654e-2ddd-4d50-afbe-38a100fdee87", 01:24:08.227 "is_configured": false, 01:24:08.227 "data_offset": 0, 01:24:08.227 "data_size": 63488 01:24:08.227 }, 01:24:08.227 { 01:24:08.227 "name": "BaseBdev3", 01:24:08.227 "uuid": "49462ba9-3e00-46c0-9f38-d7a238bd2936", 01:24:08.227 "is_configured": true, 01:24:08.227 "data_offset": 2048, 01:24:08.227 "data_size": 63488 01:24:08.227 } 01:24:08.227 ] 01:24:08.227 }' 01:24:08.227 05:18:59 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 01:24:08.227 05:18:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:08.793 05:19:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 01:24:08.793 05:19:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:08.793 05:19:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:08.793 05:19:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 01:24:08.793 05:19:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:08.793 05:19:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 01:24:08.793 05:19:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 01:24:08.793 05:19:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:08.793 05:19:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:08.793 [2024-12-09 05:19:00.174236] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 01:24:08.793 05:19:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:08.793 05:19:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 01:24:08.793 05:19:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:24:08.793 05:19:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:24:08.793 05:19:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:24:08.793 05:19:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- 
# local strip_size=0 01:24:08.793 05:19:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 01:24:08.793 05:19:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:24:08.793 05:19:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:24:08.793 05:19:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:24:08.793 05:19:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 01:24:08.793 05:19:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:24:08.793 05:19:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:08.793 05:19:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:08.793 05:19:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:24:08.793 05:19:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:08.793 05:19:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:24:08.793 "name": "Existed_Raid", 01:24:08.793 "uuid": "8853a8a5-9139-484f-bcb1-d0e768d1b283", 01:24:08.793 "strip_size_kb": 0, 01:24:08.793 "state": "configuring", 01:24:08.793 "raid_level": "raid1", 01:24:08.793 "superblock": true, 01:24:08.793 "num_base_bdevs": 3, 01:24:08.793 "num_base_bdevs_discovered": 1, 01:24:08.793 "num_base_bdevs_operational": 3, 01:24:08.793 "base_bdevs_list": [ 01:24:08.793 { 01:24:08.793 "name": null, 01:24:08.793 "uuid": "77e74891-3bfa-4ed9-9bcb-ad173f1453f8", 01:24:08.793 "is_configured": false, 01:24:08.793 "data_offset": 0, 01:24:08.793 "data_size": 63488 01:24:08.793 }, 01:24:08.793 { 01:24:08.793 "name": null, 01:24:08.793 "uuid": 
"38d8654e-2ddd-4d50-afbe-38a100fdee87", 01:24:08.793 "is_configured": false, 01:24:08.793 "data_offset": 0, 01:24:08.793 "data_size": 63488 01:24:08.793 }, 01:24:08.793 { 01:24:08.793 "name": "BaseBdev3", 01:24:08.793 "uuid": "49462ba9-3e00-46c0-9f38-d7a238bd2936", 01:24:08.793 "is_configured": true, 01:24:08.793 "data_offset": 2048, 01:24:08.793 "data_size": 63488 01:24:08.793 } 01:24:08.793 ] 01:24:08.793 }' 01:24:08.793 05:19:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:24:08.793 05:19:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:09.360 05:19:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 01:24:09.360 05:19:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 01:24:09.360 05:19:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:09.360 05:19:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:09.360 05:19:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:09.360 05:19:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 01:24:09.360 05:19:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 01:24:09.360 05:19:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:09.360 05:19:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:09.360 [2024-12-09 05:19:00.859058] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 01:24:09.360 05:19:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:09.360 05:19:00 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 01:24:09.360 05:19:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:24:09.360 05:19:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:24:09.360 05:19:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:24:09.360 05:19:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:24:09.360 05:19:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 01:24:09.360 05:19:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:24:09.360 05:19:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:24:09.360 05:19:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:24:09.360 05:19:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 01:24:09.360 05:19:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:24:09.360 05:19:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:24:09.360 05:19:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:09.360 05:19:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:09.360 05:19:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:09.360 05:19:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:24:09.360 "name": "Existed_Raid", 01:24:09.360 "uuid": "8853a8a5-9139-484f-bcb1-d0e768d1b283", 01:24:09.360 "strip_size_kb": 0, 01:24:09.360 "state": "configuring", 01:24:09.360 
"raid_level": "raid1", 01:24:09.360 "superblock": true, 01:24:09.360 "num_base_bdevs": 3, 01:24:09.360 "num_base_bdevs_discovered": 2, 01:24:09.360 "num_base_bdevs_operational": 3, 01:24:09.360 "base_bdevs_list": [ 01:24:09.360 { 01:24:09.360 "name": null, 01:24:09.360 "uuid": "77e74891-3bfa-4ed9-9bcb-ad173f1453f8", 01:24:09.360 "is_configured": false, 01:24:09.360 "data_offset": 0, 01:24:09.360 "data_size": 63488 01:24:09.360 }, 01:24:09.360 { 01:24:09.360 "name": "BaseBdev2", 01:24:09.360 "uuid": "38d8654e-2ddd-4d50-afbe-38a100fdee87", 01:24:09.360 "is_configured": true, 01:24:09.360 "data_offset": 2048, 01:24:09.360 "data_size": 63488 01:24:09.360 }, 01:24:09.360 { 01:24:09.360 "name": "BaseBdev3", 01:24:09.360 "uuid": "49462ba9-3e00-46c0-9f38-d7a238bd2936", 01:24:09.360 "is_configured": true, 01:24:09.360 "data_offset": 2048, 01:24:09.360 "data_size": 63488 01:24:09.360 } 01:24:09.360 ] 01:24:09.360 }' 01:24:09.360 05:19:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:24:09.360 05:19:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:09.932 05:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 01:24:09.932 05:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 01:24:09.932 05:19:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:09.932 05:19:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:09.932 05:19:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:09.932 05:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 01:24:09.932 05:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 01:24:09.932 05:19:01 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 01:24:09.932 05:19:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:09.932 05:19:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:09.932 05:19:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:09.932 05:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 77e74891-3bfa-4ed9-9bcb-ad173f1453f8 01:24:09.932 05:19:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:09.933 05:19:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:10.191 [2024-12-09 05:19:01.580426] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 01:24:10.191 [2024-12-09 05:19:01.581036] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 01:24:10.191 [2024-12-09 05:19:01.581063] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 01:24:10.191 NewBaseBdev 01:24:10.191 [2024-12-09 05:19:01.581488] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 01:24:10.191 [2024-12-09 05:19:01.581720] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 01:24:10.191 [2024-12-09 05:19:01.581743] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 01:24:10.191 [2024-12-09 05:19:01.581919] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:24:10.191 05:19:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:10.191 05:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 01:24:10.191 
05:19:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 01:24:10.191 05:19:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 01:24:10.191 05:19:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 01:24:10.191 05:19:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 01:24:10.191 05:19:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 01:24:10.191 05:19:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 01:24:10.191 05:19:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:10.191 05:19:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:10.191 05:19:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:10.191 05:19:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 01:24:10.191 05:19:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:10.191 05:19:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:10.191 [ 01:24:10.191 { 01:24:10.191 "name": "NewBaseBdev", 01:24:10.191 "aliases": [ 01:24:10.191 "77e74891-3bfa-4ed9-9bcb-ad173f1453f8" 01:24:10.191 ], 01:24:10.191 "product_name": "Malloc disk", 01:24:10.191 "block_size": 512, 01:24:10.191 "num_blocks": 65536, 01:24:10.191 "uuid": "77e74891-3bfa-4ed9-9bcb-ad173f1453f8", 01:24:10.191 "assigned_rate_limits": { 01:24:10.191 "rw_ios_per_sec": 0, 01:24:10.191 "rw_mbytes_per_sec": 0, 01:24:10.191 "r_mbytes_per_sec": 0, 01:24:10.191 "w_mbytes_per_sec": 0 01:24:10.191 }, 01:24:10.191 "claimed": true, 01:24:10.191 "claim_type": "exclusive_write", 01:24:10.191 
"zoned": false, 01:24:10.191 "supported_io_types": { 01:24:10.191 "read": true, 01:24:10.191 "write": true, 01:24:10.191 "unmap": true, 01:24:10.191 "flush": true, 01:24:10.191 "reset": true, 01:24:10.191 "nvme_admin": false, 01:24:10.191 "nvme_io": false, 01:24:10.191 "nvme_io_md": false, 01:24:10.191 "write_zeroes": true, 01:24:10.191 "zcopy": true, 01:24:10.191 "get_zone_info": false, 01:24:10.191 "zone_management": false, 01:24:10.191 "zone_append": false, 01:24:10.191 "compare": false, 01:24:10.191 "compare_and_write": false, 01:24:10.191 "abort": true, 01:24:10.191 "seek_hole": false, 01:24:10.191 "seek_data": false, 01:24:10.191 "copy": true, 01:24:10.191 "nvme_iov_md": false 01:24:10.191 }, 01:24:10.191 "memory_domains": [ 01:24:10.191 { 01:24:10.191 "dma_device_id": "system", 01:24:10.191 "dma_device_type": 1 01:24:10.191 }, 01:24:10.191 { 01:24:10.191 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:24:10.191 "dma_device_type": 2 01:24:10.191 } 01:24:10.191 ], 01:24:10.191 "driver_specific": {} 01:24:10.191 } 01:24:10.191 ] 01:24:10.191 05:19:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:10.191 05:19:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 01:24:10.191 05:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 01:24:10.191 05:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:24:10.191 05:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:24:10.191 05:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:24:10.191 05:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:24:10.191 05:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
01:24:10.191 05:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:24:10.191 05:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:24:10.191 05:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:24:10.191 05:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 01:24:10.192 05:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:24:10.192 05:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:24:10.192 05:19:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:10.192 05:19:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:10.192 05:19:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:10.192 05:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:24:10.192 "name": "Existed_Raid", 01:24:10.192 "uuid": "8853a8a5-9139-484f-bcb1-d0e768d1b283", 01:24:10.192 "strip_size_kb": 0, 01:24:10.192 "state": "online", 01:24:10.192 "raid_level": "raid1", 01:24:10.192 "superblock": true, 01:24:10.192 "num_base_bdevs": 3, 01:24:10.192 "num_base_bdevs_discovered": 3, 01:24:10.192 "num_base_bdevs_operational": 3, 01:24:10.192 "base_bdevs_list": [ 01:24:10.192 { 01:24:10.192 "name": "NewBaseBdev", 01:24:10.192 "uuid": "77e74891-3bfa-4ed9-9bcb-ad173f1453f8", 01:24:10.192 "is_configured": true, 01:24:10.192 "data_offset": 2048, 01:24:10.192 "data_size": 63488 01:24:10.192 }, 01:24:10.192 { 01:24:10.192 "name": "BaseBdev2", 01:24:10.192 "uuid": "38d8654e-2ddd-4d50-afbe-38a100fdee87", 01:24:10.192 "is_configured": true, 01:24:10.192 "data_offset": 2048, 01:24:10.192 "data_size": 63488 01:24:10.192 }, 01:24:10.192 
{ 01:24:10.192 "name": "BaseBdev3", 01:24:10.192 "uuid": "49462ba9-3e00-46c0-9f38-d7a238bd2936", 01:24:10.192 "is_configured": true, 01:24:10.192 "data_offset": 2048, 01:24:10.192 "data_size": 63488 01:24:10.192 } 01:24:10.192 ] 01:24:10.192 }' 01:24:10.192 05:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:24:10.192 05:19:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:10.758 05:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 01:24:10.758 05:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 01:24:10.758 05:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 01:24:10.758 05:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 01:24:10.758 05:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 01:24:10.758 05:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 01:24:10.758 05:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 01:24:10.758 05:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 01:24:10.758 05:19:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:10.758 05:19:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:10.758 [2024-12-09 05:19:02.125042] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 01:24:10.758 05:19:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:10.758 05:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 01:24:10.758 "name": "Existed_Raid", 01:24:10.758 
"aliases": [ 01:24:10.758 "8853a8a5-9139-484f-bcb1-d0e768d1b283" 01:24:10.758 ], 01:24:10.758 "product_name": "Raid Volume", 01:24:10.758 "block_size": 512, 01:24:10.758 "num_blocks": 63488, 01:24:10.758 "uuid": "8853a8a5-9139-484f-bcb1-d0e768d1b283", 01:24:10.758 "assigned_rate_limits": { 01:24:10.758 "rw_ios_per_sec": 0, 01:24:10.758 "rw_mbytes_per_sec": 0, 01:24:10.758 "r_mbytes_per_sec": 0, 01:24:10.758 "w_mbytes_per_sec": 0 01:24:10.758 }, 01:24:10.758 "claimed": false, 01:24:10.758 "zoned": false, 01:24:10.758 "supported_io_types": { 01:24:10.758 "read": true, 01:24:10.758 "write": true, 01:24:10.758 "unmap": false, 01:24:10.758 "flush": false, 01:24:10.758 "reset": true, 01:24:10.758 "nvme_admin": false, 01:24:10.758 "nvme_io": false, 01:24:10.759 "nvme_io_md": false, 01:24:10.759 "write_zeroes": true, 01:24:10.759 "zcopy": false, 01:24:10.759 "get_zone_info": false, 01:24:10.759 "zone_management": false, 01:24:10.759 "zone_append": false, 01:24:10.759 "compare": false, 01:24:10.759 "compare_and_write": false, 01:24:10.759 "abort": false, 01:24:10.759 "seek_hole": false, 01:24:10.759 "seek_data": false, 01:24:10.759 "copy": false, 01:24:10.759 "nvme_iov_md": false 01:24:10.759 }, 01:24:10.759 "memory_domains": [ 01:24:10.759 { 01:24:10.759 "dma_device_id": "system", 01:24:10.759 "dma_device_type": 1 01:24:10.759 }, 01:24:10.759 { 01:24:10.759 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:24:10.759 "dma_device_type": 2 01:24:10.759 }, 01:24:10.759 { 01:24:10.759 "dma_device_id": "system", 01:24:10.759 "dma_device_type": 1 01:24:10.759 }, 01:24:10.759 { 01:24:10.759 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:24:10.759 "dma_device_type": 2 01:24:10.759 }, 01:24:10.759 { 01:24:10.759 "dma_device_id": "system", 01:24:10.759 "dma_device_type": 1 01:24:10.759 }, 01:24:10.759 { 01:24:10.759 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:24:10.759 "dma_device_type": 2 01:24:10.759 } 01:24:10.759 ], 01:24:10.759 "driver_specific": { 01:24:10.759 "raid": { 01:24:10.759 
"uuid": "8853a8a5-9139-484f-bcb1-d0e768d1b283", 01:24:10.759 "strip_size_kb": 0, 01:24:10.759 "state": "online", 01:24:10.759 "raid_level": "raid1", 01:24:10.759 "superblock": true, 01:24:10.759 "num_base_bdevs": 3, 01:24:10.759 "num_base_bdevs_discovered": 3, 01:24:10.759 "num_base_bdevs_operational": 3, 01:24:10.759 "base_bdevs_list": [ 01:24:10.759 { 01:24:10.759 "name": "NewBaseBdev", 01:24:10.759 "uuid": "77e74891-3bfa-4ed9-9bcb-ad173f1453f8", 01:24:10.759 "is_configured": true, 01:24:10.759 "data_offset": 2048, 01:24:10.759 "data_size": 63488 01:24:10.759 }, 01:24:10.759 { 01:24:10.759 "name": "BaseBdev2", 01:24:10.759 "uuid": "38d8654e-2ddd-4d50-afbe-38a100fdee87", 01:24:10.759 "is_configured": true, 01:24:10.759 "data_offset": 2048, 01:24:10.759 "data_size": 63488 01:24:10.759 }, 01:24:10.759 { 01:24:10.759 "name": "BaseBdev3", 01:24:10.759 "uuid": "49462ba9-3e00-46c0-9f38-d7a238bd2936", 01:24:10.759 "is_configured": true, 01:24:10.759 "data_offset": 2048, 01:24:10.759 "data_size": 63488 01:24:10.759 } 01:24:10.759 ] 01:24:10.759 } 01:24:10.759 } 01:24:10.759 }' 01:24:10.759 05:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 01:24:10.759 05:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 01:24:10.759 BaseBdev2 01:24:10.759 BaseBdev3' 01:24:10.759 05:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:24:10.759 05:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 01:24:10.759 05:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:24:10.759 05:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 01:24:10.759 05:19:02 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:24:10.759 05:19:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:10.759 05:19:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:10.759 05:19:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:10.759 05:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:24:10.759 05:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:24:10.759 05:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:24:10.759 05:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 01:24:10.759 05:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:24:10.759 05:19:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:10.759 05:19:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:10.759 05:19:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:11.017 05:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:24:11.017 05:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:24:11.017 05:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:24:11.017 05:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 01:24:11.017 05:19:02 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 01:24:11.017 05:19:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:11.017 05:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:24:11.017 05:19:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:11.017 05:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:24:11.017 05:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:24:11.017 05:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 01:24:11.017 05:19:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:11.017 05:19:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:11.017 [2024-12-09 05:19:02.432740] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 01:24:11.017 [2024-12-09 05:19:02.433117] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 01:24:11.017 [2024-12-09 05:19:02.433290] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 01:24:11.017 [2024-12-09 05:19:02.433768] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 01:24:11.017 [2024-12-09 05:19:02.433788] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 01:24:11.017 05:19:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:11.017 05:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 67996 01:24:11.017 05:19:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # 
'[' -z 67996 ']' 01:24:11.017 05:19:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 67996 01:24:11.017 05:19:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 01:24:11.017 05:19:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:24:11.017 05:19:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67996 01:24:11.017 05:19:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:24:11.017 05:19:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:24:11.017 05:19:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67996' 01:24:11.017 killing process with pid 67996 01:24:11.017 05:19:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 67996 01:24:11.017 [2024-12-09 05:19:02.472989] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 01:24:11.017 05:19:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 67996 01:24:11.276 [2024-12-09 05:19:02.779552] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 01:24:12.653 05:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 01:24:12.653 01:24:12.653 real 0m12.143s 01:24:12.653 user 0m19.979s 01:24:12.653 sys 0m1.653s 01:24:12.653 05:19:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 01:24:12.653 05:19:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:12.653 ************************************ 01:24:12.653 END TEST raid_state_function_test_sb 01:24:12.653 ************************************ 01:24:12.653 05:19:03 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test 
raid_superblock_test raid1 3 01:24:12.653 05:19:03 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 01:24:12.653 05:19:03 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 01:24:12.653 05:19:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 01:24:12.653 ************************************ 01:24:12.653 START TEST raid_superblock_test 01:24:12.653 ************************************ 01:24:12.653 05:19:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 3 01:24:12.653 05:19:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 01:24:12.653 05:19:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 01:24:12.653 05:19:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 01:24:12.653 05:19:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 01:24:12.653 05:19:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 01:24:12.653 05:19:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 01:24:12.653 05:19:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 01:24:12.653 05:19:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 01:24:12.653 05:19:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 01:24:12.653 05:19:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 01:24:12.653 05:19:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 01:24:12.653 05:19:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 01:24:12.653 05:19:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 01:24:12.653 05:19:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' 
raid1 '!=' raid1 ']' 01:24:12.653 05:19:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 01:24:12.653 05:19:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=68633 01:24:12.653 05:19:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 01:24:12.654 05:19:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 68633 01:24:12.654 05:19:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 68633 ']' 01:24:12.654 05:19:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:24:12.654 05:19:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 01:24:12.654 05:19:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:24:12.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:24:12.654 05:19:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 01:24:12.654 05:19:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:24:12.654 [2024-12-09 05:19:04.066398] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
01:24:12.654 [2024-12-09 05:19:04.066778] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68633 ] 01:24:12.654 [2024-12-09 05:19:04.247151] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:24:12.912 [2024-12-09 05:19:04.365778] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:24:13.169 [2024-12-09 05:19:04.559751] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 01:24:13.169 [2024-12-09 05:19:04.559796] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 01:24:13.426 05:19:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:24:13.426 05:19:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 01:24:13.426 05:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 01:24:13.426 05:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 01:24:13.426 05:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 01:24:13.426 05:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 01:24:13.426 05:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 01:24:13.426 05:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 01:24:13.426 05:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 01:24:13.426 05:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 01:24:13.426 05:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 01:24:13.426 
05:19:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:13.426 05:19:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:24:13.682 malloc1 01:24:13.682 05:19:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:13.682 05:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 01:24:13.682 05:19:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:13.682 05:19:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:24:13.682 [2024-12-09 05:19:05.072496] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 01:24:13.682 [2024-12-09 05:19:05.072774] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:24:13.682 [2024-12-09 05:19:05.072861] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 01:24:13.682 [2024-12-09 05:19:05.073104] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:24:13.682 [2024-12-09 05:19:05.076041] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:24:13.682 [2024-12-09 05:19:05.076280] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 01:24:13.682 pt1 01:24:13.683 05:19:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:13.683 05:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 01:24:13.683 05:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 01:24:13.683 05:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 01:24:13.683 05:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 01:24:13.683 05:19:05 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 01:24:13.683 05:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 01:24:13.683 05:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 01:24:13.683 05:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 01:24:13.683 05:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 01:24:13.683 05:19:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:13.683 05:19:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:24:13.683 malloc2 01:24:13.683 05:19:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:13.683 05:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 01:24:13.683 05:19:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:13.683 05:19:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:24:13.683 [2024-12-09 05:19:05.131257] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 01:24:13.683 [2024-12-09 05:19:05.131567] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:24:13.683 [2024-12-09 05:19:05.131662] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 01:24:13.683 [2024-12-09 05:19:05.131796] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:24:13.683 [2024-12-09 05:19:05.134873] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:24:13.683 [2024-12-09 05:19:05.135090] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 01:24:13.683 
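The entries above show the test building its base bdevs in a loop: for each slot it creates a 32 MiB malloc bdev with a 512-byte block size, then wraps it in a passthru bdev with a fixed UUID (`bdev_raid.sh@425`/`@426`). A minimal sketch of that RPC sequence, as reconstructed from the log (the helper name and the command-string form are illustrative, not part of the SPDK repo):

```python
# Illustrative reconstruction of the per-base-bdev RPC sequence seen in the
# log: one malloc bdev plus one passthru wrapper (pt1..ptN) per slot.
# The fixed UUIDs 00000000-0000-0000-0000-00000000000N match the log.
def base_bdev_rpcs(num_base_bdevs):
    cmds = []
    for i in range(1, num_base_bdevs + 1):
        uuid = f"00000000-0000-0000-0000-{i:012d}"
        cmds.append(f"bdev_malloc_create 32 512 -b malloc{i}")
        cmds.append(f"bdev_passthru_create -b malloc{i} -p pt{i} -u {uuid}")
    return cmds

for cmd in base_bdev_rpcs(3):
    print(cmd)
```

Each command string corresponds to one `rpc_cmd` invocation in the trace; the real test dispatches them through SPDK's `rpc.py` against the running `bdev_svc` app.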
pt2 01:24:13.683 05:19:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:13.683 05:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 01:24:13.683 05:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 01:24:13.683 05:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 01:24:13.683 05:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 01:24:13.683 05:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 01:24:13.683 05:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 01:24:13.683 05:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 01:24:13.683 05:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 01:24:13.683 05:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 01:24:13.683 05:19:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:13.683 05:19:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:24:13.683 malloc3 01:24:13.683 05:19:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:13.683 05:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 01:24:13.683 05:19:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:13.683 05:19:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:24:13.683 [2024-12-09 05:19:05.199688] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 01:24:13.683 [2024-12-09 05:19:05.199986] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:24:13.683 [2024-12-09 05:19:05.200038] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 01:24:13.683 [2024-12-09 05:19:05.200059] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:24:13.683 [2024-12-09 05:19:05.203066] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:24:13.683 [2024-12-09 05:19:05.203291] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 01:24:13.683 pt3 01:24:13.683 05:19:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:13.683 05:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 01:24:13.683 05:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 01:24:13.683 05:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 01:24:13.683 05:19:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:13.683 05:19:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:24:13.683 [2024-12-09 05:19:05.211978] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 01:24:13.683 [2024-12-09 05:19:05.214563] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 01:24:13.683 [2024-12-09 05:19:05.214666] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 01:24:13.683 [2024-12-09 05:19:05.214886] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 01:24:13.683 [2024-12-09 05:19:05.214932] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 01:24:13.683 [2024-12-09 05:19:05.215212] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 01:24:13.683 
[2024-12-09 05:19:05.215516] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 01:24:13.683 [2024-12-09 05:19:05.215537] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 01:24:13.683 [2024-12-09 05:19:05.215709] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:24:13.683 05:19:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:13.683 05:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 01:24:13.683 05:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:24:13.683 05:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:24:13.683 05:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:24:13.683 05:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:24:13.683 05:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 01:24:13.683 05:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:24:13.683 05:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:24:13.683 05:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:24:13.683 05:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:24:13.683 05:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:24:13.683 05:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:24:13.683 05:19:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:13.683 05:19:05 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 01:24:13.683 05:19:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:13.683 05:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:24:13.683 "name": "raid_bdev1", 01:24:13.683 "uuid": "4f19c0e5-24a0-4e45-9f4a-68e7ffd6d3cc", 01:24:13.683 "strip_size_kb": 0, 01:24:13.683 "state": "online", 01:24:13.683 "raid_level": "raid1", 01:24:13.683 "superblock": true, 01:24:13.683 "num_base_bdevs": 3, 01:24:13.683 "num_base_bdevs_discovered": 3, 01:24:13.683 "num_base_bdevs_operational": 3, 01:24:13.683 "base_bdevs_list": [ 01:24:13.683 { 01:24:13.683 "name": "pt1", 01:24:13.683 "uuid": "00000000-0000-0000-0000-000000000001", 01:24:13.683 "is_configured": true, 01:24:13.683 "data_offset": 2048, 01:24:13.683 "data_size": 63488 01:24:13.683 }, 01:24:13.683 { 01:24:13.683 "name": "pt2", 01:24:13.683 "uuid": "00000000-0000-0000-0000-000000000002", 01:24:13.683 "is_configured": true, 01:24:13.683 "data_offset": 2048, 01:24:13.683 "data_size": 63488 01:24:13.683 }, 01:24:13.683 { 01:24:13.683 "name": "pt3", 01:24:13.683 "uuid": "00000000-0000-0000-0000-000000000003", 01:24:13.683 "is_configured": true, 01:24:13.683 "data_offset": 2048, 01:24:13.683 "data_size": 63488 01:24:13.683 } 01:24:13.683 ] 01:24:13.683 }' 01:24:13.683 05:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:24:13.683 05:19:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:24:14.254 05:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 01:24:14.254 05:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 01:24:14.254 05:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 01:24:14.254 05:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 01:24:14.254 05:19:05 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 01:24:14.254 05:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 01:24:14.254 05:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 01:24:14.254 05:19:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:14.254 05:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 01:24:14.254 05:19:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:24:14.254 [2024-12-09 05:19:05.712600] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 01:24:14.254 05:19:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:14.254 05:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 01:24:14.254 "name": "raid_bdev1", 01:24:14.254 "aliases": [ 01:24:14.254 "4f19c0e5-24a0-4e45-9f4a-68e7ffd6d3cc" 01:24:14.254 ], 01:24:14.254 "product_name": "Raid Volume", 01:24:14.254 "block_size": 512, 01:24:14.254 "num_blocks": 63488, 01:24:14.254 "uuid": "4f19c0e5-24a0-4e45-9f4a-68e7ffd6d3cc", 01:24:14.254 "assigned_rate_limits": { 01:24:14.254 "rw_ios_per_sec": 0, 01:24:14.254 "rw_mbytes_per_sec": 0, 01:24:14.254 "r_mbytes_per_sec": 0, 01:24:14.254 "w_mbytes_per_sec": 0 01:24:14.254 }, 01:24:14.254 "claimed": false, 01:24:14.254 "zoned": false, 01:24:14.254 "supported_io_types": { 01:24:14.254 "read": true, 01:24:14.254 "write": true, 01:24:14.254 "unmap": false, 01:24:14.254 "flush": false, 01:24:14.254 "reset": true, 01:24:14.254 "nvme_admin": false, 01:24:14.254 "nvme_io": false, 01:24:14.254 "nvme_io_md": false, 01:24:14.254 "write_zeroes": true, 01:24:14.254 "zcopy": false, 01:24:14.254 "get_zone_info": false, 01:24:14.254 "zone_management": false, 01:24:14.254 "zone_append": false, 01:24:14.254 "compare": false, 01:24:14.254 
"compare_and_write": false, 01:24:14.254 "abort": false, 01:24:14.254 "seek_hole": false, 01:24:14.254 "seek_data": false, 01:24:14.254 "copy": false, 01:24:14.254 "nvme_iov_md": false 01:24:14.254 }, 01:24:14.254 "memory_domains": [ 01:24:14.254 { 01:24:14.254 "dma_device_id": "system", 01:24:14.254 "dma_device_type": 1 01:24:14.254 }, 01:24:14.254 { 01:24:14.254 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:24:14.254 "dma_device_type": 2 01:24:14.254 }, 01:24:14.254 { 01:24:14.254 "dma_device_id": "system", 01:24:14.254 "dma_device_type": 1 01:24:14.254 }, 01:24:14.254 { 01:24:14.255 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:24:14.255 "dma_device_type": 2 01:24:14.255 }, 01:24:14.255 { 01:24:14.255 "dma_device_id": "system", 01:24:14.255 "dma_device_type": 1 01:24:14.255 }, 01:24:14.255 { 01:24:14.255 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:24:14.255 "dma_device_type": 2 01:24:14.255 } 01:24:14.255 ], 01:24:14.255 "driver_specific": { 01:24:14.255 "raid": { 01:24:14.255 "uuid": "4f19c0e5-24a0-4e45-9f4a-68e7ffd6d3cc", 01:24:14.255 "strip_size_kb": 0, 01:24:14.255 "state": "online", 01:24:14.255 "raid_level": "raid1", 01:24:14.255 "superblock": true, 01:24:14.255 "num_base_bdevs": 3, 01:24:14.255 "num_base_bdevs_discovered": 3, 01:24:14.255 "num_base_bdevs_operational": 3, 01:24:14.255 "base_bdevs_list": [ 01:24:14.255 { 01:24:14.255 "name": "pt1", 01:24:14.255 "uuid": "00000000-0000-0000-0000-000000000001", 01:24:14.255 "is_configured": true, 01:24:14.255 "data_offset": 2048, 01:24:14.255 "data_size": 63488 01:24:14.255 }, 01:24:14.255 { 01:24:14.255 "name": "pt2", 01:24:14.255 "uuid": "00000000-0000-0000-0000-000000000002", 01:24:14.255 "is_configured": true, 01:24:14.255 "data_offset": 2048, 01:24:14.255 "data_size": 63488 01:24:14.255 }, 01:24:14.255 { 01:24:14.255 "name": "pt3", 01:24:14.255 "uuid": "00000000-0000-0000-0000-000000000003", 01:24:14.255 "is_configured": true, 01:24:14.255 "data_offset": 2048, 01:24:14.255 "data_size": 63488 01:24:14.255 } 
01:24:14.255 ] 01:24:14.255 } 01:24:14.255 } 01:24:14.255 }' 01:24:14.255 05:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 01:24:14.255 05:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 01:24:14.255 pt2 01:24:14.255 pt3' 01:24:14.255 05:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:24:14.255 05:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 01:24:14.255 05:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:24:14.513 05:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 01:24:14.513 05:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:24:14.513 05:19:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:14.513 05:19:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:24:14.513 05:19:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:14.513 05:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:24:14.513 05:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:24:14.513 05:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:24:14.513 05:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 01:24:14.513 05:19:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:14.513 05:19:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:24:14.513 05:19:05 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:24:14.513 05:19:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:14.513 05:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:24:14.513 05:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:24:14.513 05:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:24:14.513 05:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:24:14.513 05:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 01:24:14.513 05:19:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:14.513 05:19:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:24:14.513 05:19:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:14.513 05:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:24:14.513 05:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:24:14.513 05:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 01:24:14.513 05:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 01:24:14.513 05:19:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:14.513 05:19:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:24:14.513 [2024-12-09 05:19:06.036671] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 01:24:14.513 05:19:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 01:24:14.513 05:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=4f19c0e5-24a0-4e45-9f4a-68e7ffd6d3cc 01:24:14.513 05:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 4f19c0e5-24a0-4e45-9f4a-68e7ffd6d3cc ']' 01:24:14.513 05:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 01:24:14.513 05:19:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:14.513 05:19:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:24:14.513 [2024-12-09 05:19:06.088231] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 01:24:14.513 [2024-12-09 05:19:06.088265] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 01:24:14.513 [2024-12-09 05:19:06.088413] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 01:24:14.513 [2024-12-09 05:19:06.088552] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 01:24:14.513 [2024-12-09 05:19:06.088573] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 01:24:14.513 05:19:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:14.513 05:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 01:24:14.513 05:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 01:24:14.513 05:19:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:14.513 05:19:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:24:14.513 05:19:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:14.771 05:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 01:24:14.771 
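The `bdev_raid.sh@189`–`@193` loop above compares metadata between the raid bdev and each base bdev: `jq` joins `[.block_size, .md_size, .md_interleave, .dif_type]` into one string per bdev, and the bash pattern `[[ 512 == \5\1\2\ \ \ ]]` checks it against the raid bdev's `'512   '` (null fields join as empty, leaving three trailing spaces). A sketch of the same comparison (helper name and sample dicts are illustrative):

```python
# Rebuild the jq expression `[.block_size, .md_size, .md_interleave,
# .dif_type] | join(" ")` -- jq renders null/missing fields as empty
# strings, so a bdev with only block_size=512 yields "512   ".
def metadata_key(bdev):
    fields = ("block_size", "md_size", "md_interleave", "dif_type")
    return " ".join("" if bdev.get(f) is None else str(bdev[f]) for f in fields)

raid = {"block_size": 512}
bases = [{"block_size": 512}, {"block_size": 512}, {"block_size": 512}]

# Equivalent of the [[ $cmp_base_bdev == $cmp_raid_bdev ]] check per base bdev
assert all(metadata_key(b) == metadata_key(raid) for b in bases)
print(repr(metadata_key(raid)))
```

Comparing the joined string rather than individual fields lets one `jq` call and one bash `[[ ]]` test cover all four metadata attributes at once, which is why the log shows `cmp_base_bdev='512 '` matched three times.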
05:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 01:24:14.771 05:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 01:24:14.771 05:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 01:24:14.771 05:19:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:14.771 05:19:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:24:14.771 05:19:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:14.771 05:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 01:24:14.771 05:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 01:24:14.771 05:19:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:14.771 05:19:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:24:14.771 05:19:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:14.771 05:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 01:24:14.771 05:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 01:24:14.771 05:19:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:14.771 05:19:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:24:14.771 05:19:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:14.771 05:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 01:24:14.771 05:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 01:24:14.771 05:19:06 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 01:24:14.771 05:19:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:24:14.771 05:19:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:14.771 05:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 01:24:14.771 05:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 01:24:14.771 05:19:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 01:24:14.771 05:19:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 01:24:14.771 05:19:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 01:24:14.771 05:19:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:24:14.771 05:19:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 01:24:14.771 05:19:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:24:14.771 05:19:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 01:24:14.771 05:19:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:14.771 05:19:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:24:14.771 [2024-12-09 05:19:06.236407] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 01:24:14.771 [2024-12-09 05:19:06.239198] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 01:24:14.771 [2024-12-09 05:19:06.239286] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev malloc3 is claimed
01:24:14.771 [2024-12-09 05:19:06.239419] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1
01:24:14.771 [2024-12-09 05:19:06.239545] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2
01:24:14.771 [2024-12-09 05:19:06.239590] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3
01:24:14.771 [2024-12-09 05:19:06.239626] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
01:24:14.771 [2024-12-09 05:19:06.239644] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring
01:24:14.771 request:
01:24:14.771 {
01:24:14.771 "name": "raid_bdev1",
01:24:14.771 "raid_level": "raid1",
01:24:14.771 "base_bdevs": [
01:24:14.771 "malloc1",
01:24:14.771 "malloc2",
01:24:14.771 "malloc3"
01:24:14.771 ],
01:24:14.771 "superblock": false,
01:24:14.771 "method": "bdev_raid_create",
01:24:14.771 "req_id": 1
01:24:14.771 }
01:24:14.771 Got JSON-RPC error response
01:24:14.771 response:
01:24:14.771 {
01:24:14.771 "code": -17,
01:24:14.771 "message": "Failed to create RAID bdev raid_bdev1: File exists"
01:24:14.771 }
01:24:14.771 05:19:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
01:24:14.771 05:19:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1
01:24:14.772 05:19:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 ))
01:24:14.772 05:19:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]]
01:24:14.772 05:19:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 ))
01:24:14.772 05:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all
01:24:14.772 05:19:06 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:14.772 05:19:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:24:14.772 05:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 01:24:14.772 05:19:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:14.772 05:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 01:24:14.772 05:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 01:24:14.772 05:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 01:24:14.772 05:19:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:14.772 05:19:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:24:14.772 [2024-12-09 05:19:06.308443] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 01:24:14.772 [2024-12-09 05:19:06.308685] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:24:14.772 [2024-12-09 05:19:06.308774] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 01:24:14.772 [2024-12-09 05:19:06.308967] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:24:14.772 [2024-12-09 05:19:06.312231] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:24:14.772 [2024-12-09 05:19:06.312460] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 01:24:14.772 [2024-12-09 05:19:06.312701] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 01:24:14.772 [2024-12-09 05:19:06.312929] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 01:24:14.772 pt1 01:24:14.772 05:19:06 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:24:14.772 05:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3
01:24:14.772 05:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
01:24:14.772 05:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
01:24:14.772 05:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
01:24:14.772 05:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
01:24:14.772 05:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
01:24:14.772 05:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
01:24:14.772 05:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
01:24:14.772 05:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
01:24:14.772 05:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
01:24:14.772 05:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
01:24:14.772 05:19:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
01:24:14.772 05:19:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
01:24:14.772 05:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
01:24:14.772 05:19:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:24:14.772 05:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
01:24:14.772 "name": "raid_bdev1",
01:24:14.772 "uuid": "4f19c0e5-24a0-4e45-9f4a-68e7ffd6d3cc",
01:24:14.772 "strip_size_kb": 0,
01:24:14.772 "state": "configuring",
01:24:14.772 "raid_level": "raid1",
01:24:14.772 "superblock": true,
01:24:14.772 "num_base_bdevs": 3,
01:24:14.772 "num_base_bdevs_discovered": 1,
01:24:14.772 "num_base_bdevs_operational": 3,
01:24:14.772 "base_bdevs_list": [
01:24:14.772 {
01:24:14.772 "name": "pt1",
01:24:14.772 "uuid": "00000000-0000-0000-0000-000000000001",
01:24:14.772 "is_configured": true,
01:24:14.772 "data_offset": 2048,
01:24:14.772 "data_size": 63488
01:24:14.772 },
01:24:14.772 {
01:24:14.772 "name": null,
01:24:14.772 "uuid": "00000000-0000-0000-0000-000000000002",
01:24:14.772 "is_configured": false,
01:24:14.772 "data_offset": 2048,
01:24:14.772 "data_size": 63488
01:24:14.772 },
01:24:14.772 {
01:24:14.772 "name": null,
01:24:14.772 "uuid": "00000000-0000-0000-0000-000000000003",
01:24:14.772 "is_configured": false,
01:24:14.772 "data_offset": 2048,
01:24:14.772 "data_size": 63488
01:24:14.772 }
01:24:14.772 ]
01:24:14.772 }'
01:24:14.772 05:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
01:24:14.772 05:19:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
01:24:15.337 05:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']'
01:24:15.338 05:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
01:24:15.338 05:19:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
01:24:15.338 05:19:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
01:24:15.338 [2024-12-09 05:19:06.861167] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2
01:24:15.338 [2024-12-09 05:19:06.861260] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
01:24:15.338 [2024-12-09 05:19:06.861328] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80
01:24:15.338 [2024-12-09 05:19:06.861347] 
vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:24:15.338 [2024-12-09 05:19:06.862261] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:24:15.338 [2024-12-09 05:19:06.862314] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 01:24:15.338 [2024-12-09 05:19:06.862461] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 01:24:15.338 [2024-12-09 05:19:06.862503] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 01:24:15.338 pt2 01:24:15.338 05:19:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:15.338 05:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 01:24:15.338 05:19:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:15.338 05:19:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:24:15.338 [2024-12-09 05:19:06.869105] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 01:24:15.338 05:19:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:15.338 05:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 01:24:15.338 05:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:24:15.338 05:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:24:15.338 05:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:24:15.338 05:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:24:15.338 05:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 01:24:15.338 05:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 01:24:15.338 05:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:24:15.338 05:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:24:15.338 05:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:24:15.338 05:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:24:15.338 05:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:24:15.338 05:19:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:15.338 05:19:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:24:15.338 05:19:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:15.338 05:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:24:15.338 "name": "raid_bdev1", 01:24:15.338 "uuid": "4f19c0e5-24a0-4e45-9f4a-68e7ffd6d3cc", 01:24:15.338 "strip_size_kb": 0, 01:24:15.338 "state": "configuring", 01:24:15.338 "raid_level": "raid1", 01:24:15.338 "superblock": true, 01:24:15.338 "num_base_bdevs": 3, 01:24:15.338 "num_base_bdevs_discovered": 1, 01:24:15.338 "num_base_bdevs_operational": 3, 01:24:15.338 "base_bdevs_list": [ 01:24:15.338 { 01:24:15.338 "name": "pt1", 01:24:15.338 "uuid": "00000000-0000-0000-0000-000000000001", 01:24:15.338 "is_configured": true, 01:24:15.338 "data_offset": 2048, 01:24:15.338 "data_size": 63488 01:24:15.338 }, 01:24:15.338 { 01:24:15.338 "name": null, 01:24:15.338 "uuid": "00000000-0000-0000-0000-000000000002", 01:24:15.338 "is_configured": false, 01:24:15.338 "data_offset": 0, 01:24:15.338 "data_size": 63488 01:24:15.338 }, 01:24:15.338 { 01:24:15.338 "name": null, 01:24:15.338 "uuid": "00000000-0000-0000-0000-000000000003", 01:24:15.338 "is_configured": false, 01:24:15.338 "data_offset": 2048, 01:24:15.338 
"data_size": 63488 01:24:15.338 } 01:24:15.338 ] 01:24:15.338 }' 01:24:15.338 05:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:24:15.338 05:19:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:24:15.902 05:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 01:24:15.902 05:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 01:24:15.902 05:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 01:24:15.902 05:19:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:15.902 05:19:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:24:15.902 [2024-12-09 05:19:07.405331] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 01:24:15.902 [2024-12-09 05:19:07.405482] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:24:15.902 [2024-12-09 05:19:07.405555] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 01:24:15.902 [2024-12-09 05:19:07.405585] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:24:15.902 [2024-12-09 05:19:07.406252] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:24:15.902 [2024-12-09 05:19:07.406288] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 01:24:15.902 [2024-12-09 05:19:07.406427] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 01:24:15.902 [2024-12-09 05:19:07.406490] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 01:24:15.902 pt2 01:24:15.902 05:19:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:15.902 05:19:07 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@478 -- # (( i++ )) 01:24:15.902 05:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 01:24:15.902 05:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 01:24:15.902 05:19:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:15.902 05:19:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:24:15.902 [2024-12-09 05:19:07.413280] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 01:24:15.902 [2024-12-09 05:19:07.413578] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:24:15.902 [2024-12-09 05:19:07.413750] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 01:24:15.902 [2024-12-09 05:19:07.413901] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:24:15.902 [2024-12-09 05:19:07.414638] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:24:15.902 [2024-12-09 05:19:07.414819] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 01:24:15.902 [2024-12-09 05:19:07.415098] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 01:24:15.902 [2024-12-09 05:19:07.415296] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 01:24:15.902 [2024-12-09 05:19:07.415641] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 01:24:15.902 [2024-12-09 05:19:07.415805] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 01:24:15.902 [2024-12-09 05:19:07.416261] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 01:24:15.902 [2024-12-09 05:19:07.416832] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 
01:24:15.902 [2024-12-09 05:19:07.417092] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 01:24:15.902 [2024-12-09 05:19:07.417566] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:24:15.902 pt3 01:24:15.902 05:19:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:15.902 05:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 01:24:15.902 05:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 01:24:15.902 05:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 01:24:15.902 05:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:24:15.902 05:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:24:15.902 05:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:24:15.902 05:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:24:15.902 05:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 01:24:15.902 05:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:24:15.902 05:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:24:15.902 05:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:24:15.902 05:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:24:15.903 05:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:24:15.903 05:19:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:15.903 05:19:07 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 01:24:15.903 05:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:24:15.903 05:19:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:15.903 05:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:24:15.903 "name": "raid_bdev1", 01:24:15.903 "uuid": "4f19c0e5-24a0-4e45-9f4a-68e7ffd6d3cc", 01:24:15.903 "strip_size_kb": 0, 01:24:15.903 "state": "online", 01:24:15.903 "raid_level": "raid1", 01:24:15.903 "superblock": true, 01:24:15.903 "num_base_bdevs": 3, 01:24:15.903 "num_base_bdevs_discovered": 3, 01:24:15.903 "num_base_bdevs_operational": 3, 01:24:15.903 "base_bdevs_list": [ 01:24:15.903 { 01:24:15.903 "name": "pt1", 01:24:15.903 "uuid": "00000000-0000-0000-0000-000000000001", 01:24:15.903 "is_configured": true, 01:24:15.903 "data_offset": 2048, 01:24:15.903 "data_size": 63488 01:24:15.903 }, 01:24:15.903 { 01:24:15.903 "name": "pt2", 01:24:15.903 "uuid": "00000000-0000-0000-0000-000000000002", 01:24:15.903 "is_configured": true, 01:24:15.903 "data_offset": 2048, 01:24:15.903 "data_size": 63488 01:24:15.903 }, 01:24:15.903 { 01:24:15.903 "name": "pt3", 01:24:15.903 "uuid": "00000000-0000-0000-0000-000000000003", 01:24:15.903 "is_configured": true, 01:24:15.903 "data_offset": 2048, 01:24:15.903 "data_size": 63488 01:24:15.903 } 01:24:15.903 ] 01:24:15.903 }' 01:24:15.903 05:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:24:15.903 05:19:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:24:16.468 05:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 01:24:16.468 05:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 01:24:16.468 05:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
01:24:16.468 05:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 01:24:16.468 05:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 01:24:16.468 05:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 01:24:16.468 05:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 01:24:16.468 05:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 01:24:16.468 05:19:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:16.468 05:19:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:24:16.468 [2024-12-09 05:19:07.934167] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 01:24:16.468 05:19:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:16.468 05:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 01:24:16.468 "name": "raid_bdev1", 01:24:16.468 "aliases": [ 01:24:16.468 "4f19c0e5-24a0-4e45-9f4a-68e7ffd6d3cc" 01:24:16.468 ], 01:24:16.468 "product_name": "Raid Volume", 01:24:16.468 "block_size": 512, 01:24:16.468 "num_blocks": 63488, 01:24:16.468 "uuid": "4f19c0e5-24a0-4e45-9f4a-68e7ffd6d3cc", 01:24:16.468 "assigned_rate_limits": { 01:24:16.468 "rw_ios_per_sec": 0, 01:24:16.468 "rw_mbytes_per_sec": 0, 01:24:16.468 "r_mbytes_per_sec": 0, 01:24:16.468 "w_mbytes_per_sec": 0 01:24:16.468 }, 01:24:16.468 "claimed": false, 01:24:16.468 "zoned": false, 01:24:16.468 "supported_io_types": { 01:24:16.468 "read": true, 01:24:16.468 "write": true, 01:24:16.468 "unmap": false, 01:24:16.468 "flush": false, 01:24:16.468 "reset": true, 01:24:16.468 "nvme_admin": false, 01:24:16.468 "nvme_io": false, 01:24:16.468 "nvme_io_md": false, 01:24:16.468 "write_zeroes": true, 01:24:16.468 "zcopy": false, 01:24:16.468 "get_zone_info": false, 
01:24:16.468 "zone_management": false, 01:24:16.468 "zone_append": false, 01:24:16.468 "compare": false, 01:24:16.468 "compare_and_write": false, 01:24:16.468 "abort": false, 01:24:16.468 "seek_hole": false, 01:24:16.468 "seek_data": false, 01:24:16.468 "copy": false, 01:24:16.468 "nvme_iov_md": false 01:24:16.468 }, 01:24:16.468 "memory_domains": [ 01:24:16.468 { 01:24:16.468 "dma_device_id": "system", 01:24:16.468 "dma_device_type": 1 01:24:16.468 }, 01:24:16.468 { 01:24:16.468 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:24:16.468 "dma_device_type": 2 01:24:16.468 }, 01:24:16.468 { 01:24:16.468 "dma_device_id": "system", 01:24:16.468 "dma_device_type": 1 01:24:16.468 }, 01:24:16.468 { 01:24:16.468 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:24:16.468 "dma_device_type": 2 01:24:16.468 }, 01:24:16.468 { 01:24:16.468 "dma_device_id": "system", 01:24:16.468 "dma_device_type": 1 01:24:16.468 }, 01:24:16.468 { 01:24:16.468 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:24:16.468 "dma_device_type": 2 01:24:16.468 } 01:24:16.468 ], 01:24:16.468 "driver_specific": { 01:24:16.468 "raid": { 01:24:16.468 "uuid": "4f19c0e5-24a0-4e45-9f4a-68e7ffd6d3cc", 01:24:16.468 "strip_size_kb": 0, 01:24:16.468 "state": "online", 01:24:16.468 "raid_level": "raid1", 01:24:16.468 "superblock": true, 01:24:16.468 "num_base_bdevs": 3, 01:24:16.468 "num_base_bdevs_discovered": 3, 01:24:16.468 "num_base_bdevs_operational": 3, 01:24:16.468 "base_bdevs_list": [ 01:24:16.468 { 01:24:16.468 "name": "pt1", 01:24:16.468 "uuid": "00000000-0000-0000-0000-000000000001", 01:24:16.468 "is_configured": true, 01:24:16.468 "data_offset": 2048, 01:24:16.468 "data_size": 63488 01:24:16.468 }, 01:24:16.468 { 01:24:16.468 "name": "pt2", 01:24:16.468 "uuid": "00000000-0000-0000-0000-000000000002", 01:24:16.468 "is_configured": true, 01:24:16.468 "data_offset": 2048, 01:24:16.468 "data_size": 63488 01:24:16.468 }, 01:24:16.468 { 01:24:16.468 "name": "pt3", 01:24:16.468 "uuid": 
"00000000-0000-0000-0000-000000000003", 01:24:16.468 "is_configured": true, 01:24:16.468 "data_offset": 2048, 01:24:16.468 "data_size": 63488 01:24:16.468 } 01:24:16.468 ] 01:24:16.468 } 01:24:16.468 } 01:24:16.468 }' 01:24:16.468 05:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 01:24:16.469 05:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 01:24:16.469 pt2 01:24:16.469 pt3' 01:24:16.469 05:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:24:16.727 05:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 01:24:16.727 05:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:24:16.727 05:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:24:16.727 05:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 01:24:16.727 05:19:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:16.727 05:19:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:24:16.727 05:19:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:16.727 05:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:24:16.727 05:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:24:16.727 05:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:24:16.727 05:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:24:16.727 05:19:08 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 01:24:16.727 05:19:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:16.727 05:19:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:24:16.727 05:19:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:16.727 05:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:24:16.727 05:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:24:16.727 05:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:24:16.727 05:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 01:24:16.727 05:19:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:16.727 05:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:24:16.727 05:19:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:24:16.727 05:19:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:16.727 05:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:24:16.727 05:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:24:16.727 05:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 01:24:16.727 05:19:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:16.727 05:19:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:24:16.727 05:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 01:24:16.727 [2024-12-09 05:19:08.254193] 
bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 01:24:16.727 05:19:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:16.727 05:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 4f19c0e5-24a0-4e45-9f4a-68e7ffd6d3cc '!=' 4f19c0e5-24a0-4e45-9f4a-68e7ffd6d3cc ']' 01:24:16.727 05:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 01:24:16.727 05:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 01:24:16.727 05:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 01:24:16.727 05:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 01:24:16.727 05:19:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:16.727 05:19:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:24:16.727 [2024-12-09 05:19:08.306006] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 01:24:16.727 05:19:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:16.727 05:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 01:24:16.727 05:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:24:16.727 05:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:24:16.727 05:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:24:16.727 05:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:24:16.727 05:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 01:24:16.727 05:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:24:16.727 05:19:08 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:24:16.727 05:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:24:16.727 05:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:24:16.727 05:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:24:16.727 05:19:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:16.727 05:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:24:16.727 05:19:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:24:16.727 05:19:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:16.985 05:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:24:16.985 "name": "raid_bdev1", 01:24:16.986 "uuid": "4f19c0e5-24a0-4e45-9f4a-68e7ffd6d3cc", 01:24:16.986 "strip_size_kb": 0, 01:24:16.986 "state": "online", 01:24:16.986 "raid_level": "raid1", 01:24:16.986 "superblock": true, 01:24:16.986 "num_base_bdevs": 3, 01:24:16.986 "num_base_bdevs_discovered": 2, 01:24:16.986 "num_base_bdevs_operational": 2, 01:24:16.986 "base_bdevs_list": [ 01:24:16.986 { 01:24:16.986 "name": null, 01:24:16.986 "uuid": "00000000-0000-0000-0000-000000000000", 01:24:16.986 "is_configured": false, 01:24:16.986 "data_offset": 0, 01:24:16.986 "data_size": 63488 01:24:16.986 }, 01:24:16.986 { 01:24:16.986 "name": "pt2", 01:24:16.986 "uuid": "00000000-0000-0000-0000-000000000002", 01:24:16.986 "is_configured": true, 01:24:16.986 "data_offset": 2048, 01:24:16.986 "data_size": 63488 01:24:16.986 }, 01:24:16.986 { 01:24:16.986 "name": "pt3", 01:24:16.986 "uuid": "00000000-0000-0000-0000-000000000003", 01:24:16.986 "is_configured": true, 01:24:16.986 "data_offset": 2048, 01:24:16.986 "data_size": 63488 01:24:16.986 } 
01:24:16.986 ] 01:24:16.986 }' 01:24:16.986 05:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:24:16.986 05:19:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:24:17.244 05:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 01:24:17.244 05:19:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:17.244 05:19:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:24:17.244 [2024-12-09 05:19:08.810105] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 01:24:17.244 [2024-12-09 05:19:08.810142] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 01:24:17.244 [2024-12-09 05:19:08.810244] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 01:24:17.244 [2024-12-09 05:19:08.810327] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 01:24:17.244 [2024-12-09 05:19:08.810351] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 01:24:17.244 05:19:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:17.244 05:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 01:24:17.244 05:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 01:24:17.244 05:19:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:17.244 05:19:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:24:17.244 05:19:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:17.502 05:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 01:24:17.502 05:19:08 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 01:24:17.502 05:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 01:24:17.502 05:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 01:24:17.502 05:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 01:24:17.502 05:19:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:17.502 05:19:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:24:17.502 05:19:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:17.502 05:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 01:24:17.502 05:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 01:24:17.502 05:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 01:24:17.502 05:19:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:17.502 05:19:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:24:17.502 05:19:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:17.502 05:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 01:24:17.502 05:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 01:24:17.502 05:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 01:24:17.502 05:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 01:24:17.502 05:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 01:24:17.502 05:19:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:17.502 05:19:08 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:24:17.502 [2024-12-09 05:19:08.890079] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 01:24:17.502 [2024-12-09 05:19:08.890167] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:24:17.502 [2024-12-09 05:19:08.890196] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 01:24:17.502 [2024-12-09 05:19:08.890215] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:24:17.502 [2024-12-09 05:19:08.893735] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:24:17.502 [2024-12-09 05:19:08.893795] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 01:24:17.502 [2024-12-09 05:19:08.893960] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 01:24:17.502 [2024-12-09 05:19:08.894041] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 01:24:17.502 pt2 01:24:17.502 05:19:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:17.502 05:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 01:24:17.502 05:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:24:17.502 05:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:24:17.502 05:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:24:17.502 05:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:24:17.502 05:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 01:24:17.502 05:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:24:17.502 05:19:08 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:24:17.502 05:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:24:17.502 05:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:24:17.502 05:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:24:17.502 05:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:24:17.502 05:19:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:17.502 05:19:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:24:17.502 05:19:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:17.502 05:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:24:17.502 "name": "raid_bdev1", 01:24:17.502 "uuid": "4f19c0e5-24a0-4e45-9f4a-68e7ffd6d3cc", 01:24:17.502 "strip_size_kb": 0, 01:24:17.502 "state": "configuring", 01:24:17.502 "raid_level": "raid1", 01:24:17.502 "superblock": true, 01:24:17.502 "num_base_bdevs": 3, 01:24:17.502 "num_base_bdevs_discovered": 1, 01:24:17.502 "num_base_bdevs_operational": 2, 01:24:17.502 "base_bdevs_list": [ 01:24:17.502 { 01:24:17.502 "name": null, 01:24:17.502 "uuid": "00000000-0000-0000-0000-000000000000", 01:24:17.502 "is_configured": false, 01:24:17.502 "data_offset": 2048, 01:24:17.502 "data_size": 63488 01:24:17.502 }, 01:24:17.502 { 01:24:17.502 "name": "pt2", 01:24:17.502 "uuid": "00000000-0000-0000-0000-000000000002", 01:24:17.502 "is_configured": true, 01:24:17.502 "data_offset": 2048, 01:24:17.502 "data_size": 63488 01:24:17.502 }, 01:24:17.502 { 01:24:17.502 "name": null, 01:24:17.502 "uuid": "00000000-0000-0000-0000-000000000003", 01:24:17.502 "is_configured": false, 01:24:17.502 "data_offset": 2048, 01:24:17.502 "data_size": 63488 01:24:17.502 } 
01:24:17.502 ] 01:24:17.502 }' 01:24:17.502 05:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:24:17.502 05:19:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:24:18.068 05:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 01:24:18.068 05:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 01:24:18.068 05:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 01:24:18.068 05:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 01:24:18.068 05:19:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:18.068 05:19:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:24:18.068 [2024-12-09 05:19:09.414447] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 01:24:18.069 [2024-12-09 05:19:09.414628] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:24:18.069 [2024-12-09 05:19:09.414667] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 01:24:18.069 [2024-12-09 05:19:09.414706] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:24:18.069 [2024-12-09 05:19:09.415415] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:24:18.069 [2024-12-09 05:19:09.415478] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 01:24:18.069 [2024-12-09 05:19:09.415644] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 01:24:18.069 [2024-12-09 05:19:09.415696] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 01:24:18.069 [2024-12-09 05:19:09.415901] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 
01:24:18.069 [2024-12-09 05:19:09.415925] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 01:24:18.069 [2024-12-09 05:19:09.416267] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 01:24:18.069 [2024-12-09 05:19:09.416561] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 01:24:18.069 [2024-12-09 05:19:09.416581] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 01:24:18.069 [2024-12-09 05:19:09.416835] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:24:18.069 pt3 01:24:18.069 05:19:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:18.069 05:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 01:24:18.069 05:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:24:18.069 05:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:24:18.069 05:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:24:18.069 05:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:24:18.069 05:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 01:24:18.069 05:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:24:18.069 05:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:24:18.069 05:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:24:18.069 05:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:24:18.069 05:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:24:18.069 
05:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:24:18.069 05:19:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:18.069 05:19:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:24:18.069 05:19:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:18.069 05:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:24:18.069 "name": "raid_bdev1", 01:24:18.069 "uuid": "4f19c0e5-24a0-4e45-9f4a-68e7ffd6d3cc", 01:24:18.069 "strip_size_kb": 0, 01:24:18.069 "state": "online", 01:24:18.069 "raid_level": "raid1", 01:24:18.069 "superblock": true, 01:24:18.069 "num_base_bdevs": 3, 01:24:18.069 "num_base_bdevs_discovered": 2, 01:24:18.069 "num_base_bdevs_operational": 2, 01:24:18.069 "base_bdevs_list": [ 01:24:18.069 { 01:24:18.069 "name": null, 01:24:18.069 "uuid": "00000000-0000-0000-0000-000000000000", 01:24:18.069 "is_configured": false, 01:24:18.069 "data_offset": 2048, 01:24:18.069 "data_size": 63488 01:24:18.069 }, 01:24:18.069 { 01:24:18.069 "name": "pt2", 01:24:18.069 "uuid": "00000000-0000-0000-0000-000000000002", 01:24:18.069 "is_configured": true, 01:24:18.069 "data_offset": 2048, 01:24:18.069 "data_size": 63488 01:24:18.069 }, 01:24:18.069 { 01:24:18.069 "name": "pt3", 01:24:18.069 "uuid": "00000000-0000-0000-0000-000000000003", 01:24:18.069 "is_configured": true, 01:24:18.069 "data_offset": 2048, 01:24:18.069 "data_size": 63488 01:24:18.069 } 01:24:18.069 ] 01:24:18.069 }' 01:24:18.069 05:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:24:18.069 05:19:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:24:18.327 05:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 01:24:18.327 05:19:09 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 01:24:18.327 05:19:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:24:18.586 [2024-12-09 05:19:09.946684] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 01:24:18.586 [2024-12-09 05:19:09.946745] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 01:24:18.586 [2024-12-09 05:19:09.946916] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 01:24:18.586 [2024-12-09 05:19:09.947010] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 01:24:18.586 [2024-12-09 05:19:09.947029] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 01:24:18.586 05:19:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:18.586 05:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 01:24:18.586 05:19:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:18.586 05:19:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:24:18.586 05:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 01:24:18.586 05:19:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:18.586 05:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 01:24:18.586 05:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 01:24:18.586 05:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 01:24:18.586 05:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 01:24:18.586 05:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 01:24:18.586 05:19:10 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 01:24:18.586 05:19:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:24:18.586 05:19:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:18.586 05:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 01:24:18.586 05:19:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:18.586 05:19:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:24:18.586 [2024-12-09 05:19:10.018705] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 01:24:18.586 [2024-12-09 05:19:10.018823] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:24:18.586 [2024-12-09 05:19:10.018905] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 01:24:18.586 [2024-12-09 05:19:10.018925] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:24:18.586 [2024-12-09 05:19:10.022083] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:24:18.586 [2024-12-09 05:19:10.022132] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 01:24:18.586 [2024-12-09 05:19:10.022261] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 01:24:18.586 [2024-12-09 05:19:10.022329] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 01:24:18.586 [2024-12-09 05:19:10.022584] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 01:24:18.586 [2024-12-09 05:19:10.022606] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 01:24:18.586 [2024-12-09 05:19:10.022635] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008580 name raid_bdev1, state configuring 01:24:18.586 [2024-12-09 05:19:10.022715] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 01:24:18.586 pt1 01:24:18.586 05:19:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:18.586 05:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 01:24:18.586 05:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 01:24:18.586 05:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:24:18.586 05:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:24:18.586 05:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:24:18.586 05:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:24:18.586 05:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 01:24:18.586 05:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:24:18.586 05:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:24:18.586 05:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:24:18.586 05:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:24:18.586 05:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:24:18.586 05:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:24:18.586 05:19:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:18.586 05:19:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:24:18.586 05:19:10 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:18.586 05:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:24:18.586 "name": "raid_bdev1", 01:24:18.586 "uuid": "4f19c0e5-24a0-4e45-9f4a-68e7ffd6d3cc", 01:24:18.586 "strip_size_kb": 0, 01:24:18.586 "state": "configuring", 01:24:18.586 "raid_level": "raid1", 01:24:18.586 "superblock": true, 01:24:18.586 "num_base_bdevs": 3, 01:24:18.586 "num_base_bdevs_discovered": 1, 01:24:18.586 "num_base_bdevs_operational": 2, 01:24:18.586 "base_bdevs_list": [ 01:24:18.586 { 01:24:18.586 "name": null, 01:24:18.586 "uuid": "00000000-0000-0000-0000-000000000000", 01:24:18.586 "is_configured": false, 01:24:18.586 "data_offset": 2048, 01:24:18.586 "data_size": 63488 01:24:18.586 }, 01:24:18.586 { 01:24:18.586 "name": "pt2", 01:24:18.587 "uuid": "00000000-0000-0000-0000-000000000002", 01:24:18.587 "is_configured": true, 01:24:18.587 "data_offset": 2048, 01:24:18.587 "data_size": 63488 01:24:18.587 }, 01:24:18.587 { 01:24:18.587 "name": null, 01:24:18.587 "uuid": "00000000-0000-0000-0000-000000000003", 01:24:18.587 "is_configured": false, 01:24:18.587 "data_offset": 2048, 01:24:18.587 "data_size": 63488 01:24:18.587 } 01:24:18.587 ] 01:24:18.587 }' 01:24:18.587 05:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:24:18.587 05:19:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:24:19.154 05:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 01:24:19.154 05:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 01:24:19.154 05:19:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:19.154 05:19:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:24:19.154 05:19:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
01:24:19.154 05:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 01:24:19.154 05:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 01:24:19.154 05:19:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:19.154 05:19:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:24:19.154 [2024-12-09 05:19:10.619168] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 01:24:19.154 [2024-12-09 05:19:10.619458] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:24:19.154 [2024-12-09 05:19:10.619652] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 01:24:19.154 [2024-12-09 05:19:10.619685] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:24:19.154 [2024-12-09 05:19:10.620381] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:24:19.154 [2024-12-09 05:19:10.620457] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 01:24:19.154 [2024-12-09 05:19:10.620600] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 01:24:19.154 [2024-12-09 05:19:10.620641] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 01:24:19.154 [2024-12-09 05:19:10.620816] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 01:24:19.154 [2024-12-09 05:19:10.620843] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 01:24:19.154 [2024-12-09 05:19:10.621215] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 01:24:19.154 [2024-12-09 05:19:10.621475] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 01:24:19.154 [2024-12-09 05:19:10.621532] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 01:24:19.154 [2024-12-09 05:19:10.621729] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:24:19.154 pt3 01:24:19.154 05:19:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:19.154 05:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 01:24:19.154 05:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:24:19.154 05:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:24:19.154 05:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:24:19.154 05:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:24:19.154 05:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 01:24:19.154 05:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:24:19.154 05:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:24:19.154 05:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:24:19.154 05:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:24:19.154 05:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:24:19.154 05:19:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:19.154 05:19:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:24:19.154 05:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:24:19.154 05:19:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 01:24:19.154 05:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:24:19.154 "name": "raid_bdev1", 01:24:19.154 "uuid": "4f19c0e5-24a0-4e45-9f4a-68e7ffd6d3cc", 01:24:19.154 "strip_size_kb": 0, 01:24:19.154 "state": "online", 01:24:19.154 "raid_level": "raid1", 01:24:19.154 "superblock": true, 01:24:19.154 "num_base_bdevs": 3, 01:24:19.154 "num_base_bdevs_discovered": 2, 01:24:19.154 "num_base_bdevs_operational": 2, 01:24:19.154 "base_bdevs_list": [ 01:24:19.154 { 01:24:19.154 "name": null, 01:24:19.154 "uuid": "00000000-0000-0000-0000-000000000000", 01:24:19.154 "is_configured": false, 01:24:19.154 "data_offset": 2048, 01:24:19.154 "data_size": 63488 01:24:19.154 }, 01:24:19.154 { 01:24:19.154 "name": "pt2", 01:24:19.154 "uuid": "00000000-0000-0000-0000-000000000002", 01:24:19.154 "is_configured": true, 01:24:19.154 "data_offset": 2048, 01:24:19.154 "data_size": 63488 01:24:19.154 }, 01:24:19.154 { 01:24:19.154 "name": "pt3", 01:24:19.154 "uuid": "00000000-0000-0000-0000-000000000003", 01:24:19.154 "is_configured": true, 01:24:19.154 "data_offset": 2048, 01:24:19.154 "data_size": 63488 01:24:19.154 } 01:24:19.154 ] 01:24:19.154 }' 01:24:19.154 05:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:24:19.154 05:19:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:24:19.723 05:19:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 01:24:19.723 05:19:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 01:24:19.723 05:19:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:19.723 05:19:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:24:19.723 05:19:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:19.723 05:19:11 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 01:24:19.723 05:19:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 01:24:19.723 05:19:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 01:24:19.723 05:19:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:19.723 05:19:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:24:19.723 [2024-12-09 05:19:11.215775] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 01:24:19.723 05:19:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:19.723 05:19:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 4f19c0e5-24a0-4e45-9f4a-68e7ffd6d3cc '!=' 4f19c0e5-24a0-4e45-9f4a-68e7ffd6d3cc ']' 01:24:19.723 05:19:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 68633 01:24:19.723 05:19:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 68633 ']' 01:24:19.723 05:19:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 68633 01:24:19.723 05:19:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 01:24:19.723 05:19:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:24:19.723 05:19:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68633 01:24:19.723 killing process with pid 68633 01:24:19.723 05:19:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:24:19.723 05:19:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:24:19.723 05:19:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68633' 01:24:19.723 05:19:11 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@973 -- # kill 68633 01:24:19.723 [2024-12-09 05:19:11.292027] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 01:24:19.723 05:19:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 68633 01:24:19.723 [2024-12-09 05:19:11.292137] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 01:24:19.723 [2024-12-09 05:19:11.292221] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 01:24:19.723 [2024-12-09 05:19:11.292242] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 01:24:19.983 [2024-12-09 05:19:11.557309] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 01:24:21.359 05:19:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 01:24:21.359 01:24:21.359 real 0m8.820s 01:24:21.359 user 0m14.332s 01:24:21.359 sys 0m1.201s 01:24:21.359 05:19:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 01:24:21.359 05:19:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:24:21.359 ************************************ 01:24:21.359 END TEST raid_superblock_test 01:24:21.359 ************************************ 01:24:21.359 05:19:12 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 01:24:21.359 05:19:12 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 01:24:21.359 05:19:12 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 01:24:21.359 05:19:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 01:24:21.359 ************************************ 01:24:21.359 START TEST raid_read_error_test 01:24:21.359 ************************************ 01:24:21.359 05:19:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 read 01:24:21.359 05:19:12 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 01:24:21.359 05:19:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 01:24:21.359 05:19:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 01:24:21.359 05:19:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 01:24:21.359 05:19:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 01:24:21.359 05:19:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 01:24:21.359 05:19:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 01:24:21.359 05:19:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 01:24:21.359 05:19:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 01:24:21.359 05:19:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 01:24:21.359 05:19:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 01:24:21.359 05:19:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 01:24:21.359 05:19:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 01:24:21.359 05:19:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 01:24:21.359 05:19:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 01:24:21.359 05:19:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 01:24:21.359 05:19:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 01:24:21.359 05:19:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 01:24:21.359 05:19:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 01:24:21.359 05:19:12 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 01:24:21.359 05:19:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 01:24:21.359 05:19:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 01:24:21.359 05:19:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 01:24:21.359 05:19:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 01:24:21.359 05:19:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.vjd745qYMZ 01:24:21.359 05:19:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69094 01:24:21.359 05:19:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69094 01:24:21.359 05:19:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 01:24:21.359 05:19:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 69094 ']' 01:24:21.359 05:19:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:24:21.359 05:19:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 01:24:21.359 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:24:21.359 05:19:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:24:21.359 05:19:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 01:24:21.359 05:19:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 01:24:21.359 [2024-12-09 05:19:12.963200] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
01:24:21.359 [2024-12-09 05:19:12.963428] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69094 ] 01:24:21.618 [2024-12-09 05:19:13.151837] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:24:21.875 [2024-12-09 05:19:13.295149] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:24:22.134 [2024-12-09 05:19:13.510959] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 01:24:22.134 [2024-12-09 05:19:13.511011] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 01:24:22.391 05:19:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:24:22.391 05:19:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 01:24:22.391 05:19:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 01:24:22.391 05:19:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 01:24:22.391 05:19:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:22.391 05:19:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 01:24:22.391 BaseBdev1_malloc 01:24:22.391 05:19:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:22.391 05:19:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 01:24:22.391 05:19:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:22.391 05:19:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 01:24:22.649 true 01:24:22.649 05:19:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
01:24:22.649 05:19:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 01:24:22.649 05:19:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:22.649 05:19:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 01:24:22.649 [2024-12-09 05:19:14.019383] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 01:24:22.649 [2024-12-09 05:19:14.019497] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:24:22.649 [2024-12-09 05:19:14.019527] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 01:24:22.649 [2024-12-09 05:19:14.019545] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:24:22.649 [2024-12-09 05:19:14.022348] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:24:22.649 [2024-12-09 05:19:14.022455] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 01:24:22.649 BaseBdev1 01:24:22.649 05:19:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:22.649 05:19:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 01:24:22.649 05:19:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 01:24:22.649 05:19:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:22.649 05:19:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 01:24:22.649 BaseBdev2_malloc 01:24:22.649 05:19:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:22.649 05:19:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 01:24:22.649 05:19:14 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 01:24:22.649 05:19:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 01:24:22.649 true 01:24:22.649 05:19:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:22.649 05:19:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 01:24:22.649 05:19:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:22.650 05:19:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 01:24:22.650 [2024-12-09 05:19:14.074511] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 01:24:22.650 [2024-12-09 05:19:14.074610] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:24:22.650 [2024-12-09 05:19:14.074635] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 01:24:22.650 [2024-12-09 05:19:14.074652] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:24:22.650 [2024-12-09 05:19:14.077645] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:24:22.650 [2024-12-09 05:19:14.077693] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 01:24:22.650 BaseBdev2 01:24:22.650 05:19:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:22.650 05:19:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 01:24:22.650 05:19:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 01:24:22.650 05:19:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:22.650 05:19:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 01:24:22.650 BaseBdev3_malloc 01:24:22.650 05:19:14 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:22.650 05:19:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 01:24:22.650 05:19:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:22.650 05:19:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 01:24:22.650 true 01:24:22.650 05:19:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:22.650 05:19:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 01:24:22.650 05:19:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:22.650 05:19:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 01:24:22.650 [2024-12-09 05:19:14.143924] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 01:24:22.650 [2024-12-09 05:19:14.143988] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:24:22.650 [2024-12-09 05:19:14.144014] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 01:24:22.650 [2024-12-09 05:19:14.144031] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:24:22.650 [2024-12-09 05:19:14.147769] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:24:22.650 [2024-12-09 05:19:14.147892] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 01:24:22.650 BaseBdev3 01:24:22.650 05:19:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:22.650 05:19:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 01:24:22.650 05:19:14 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 01:24:22.650 05:19:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 01:24:22.650 [2024-12-09 05:19:14.156267] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 01:24:22.650 [2024-12-09 05:19:14.159161] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 01:24:22.650 [2024-12-09 05:19:14.159306] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 01:24:22.650 [2024-12-09 05:19:14.159604] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 01:24:22.650 [2024-12-09 05:19:14.159633] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 01:24:22.650 [2024-12-09 05:19:14.159960] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 01:24:22.650 [2024-12-09 05:19:14.160215] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 01:24:22.650 [2024-12-09 05:19:14.160244] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 01:24:22.650 [2024-12-09 05:19:14.160518] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:24:22.650 05:19:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:22.650 05:19:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 01:24:22.650 05:19:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:24:22.650 05:19:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:24:22.650 05:19:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:24:22.650 05:19:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:24:22.650 05:19:14 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 01:24:22.650 05:19:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:24:22.650 05:19:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:24:22.650 05:19:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:24:22.650 05:19:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:24:22.650 05:19:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:24:22.650 05:19:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:24:22.650 05:19:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:22.650 05:19:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 01:24:22.650 05:19:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:22.650 05:19:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:24:22.650 "name": "raid_bdev1", 01:24:22.650 "uuid": "8764f17f-91e4-4a9e-8cf3-cee9cc903a83", 01:24:22.650 "strip_size_kb": 0, 01:24:22.650 "state": "online", 01:24:22.650 "raid_level": "raid1", 01:24:22.650 "superblock": true, 01:24:22.650 "num_base_bdevs": 3, 01:24:22.650 "num_base_bdevs_discovered": 3, 01:24:22.650 "num_base_bdevs_operational": 3, 01:24:22.650 "base_bdevs_list": [ 01:24:22.650 { 01:24:22.650 "name": "BaseBdev1", 01:24:22.650 "uuid": "0c4b2be9-f243-58c3-a905-7e797d32ac87", 01:24:22.650 "is_configured": true, 01:24:22.650 "data_offset": 2048, 01:24:22.650 "data_size": 63488 01:24:22.650 }, 01:24:22.650 { 01:24:22.650 "name": "BaseBdev2", 01:24:22.650 "uuid": "329527a8-0d72-58d3-8b1f-962f00dbc6b0", 01:24:22.650 "is_configured": true, 01:24:22.650 "data_offset": 2048, 01:24:22.650 "data_size": 63488 
01:24:22.650 }, 01:24:22.650 { 01:24:22.650 "name": "BaseBdev3", 01:24:22.650 "uuid": "4162b626-dffe-58d2-860a-deb2bfe3d182", 01:24:22.650 "is_configured": true, 01:24:22.650 "data_offset": 2048, 01:24:22.650 "data_size": 63488 01:24:22.650 } 01:24:22.650 ] 01:24:22.650 }' 01:24:22.650 05:19:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:24:22.650 05:19:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 01:24:23.216 05:19:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 01:24:23.216 05:19:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 01:24:23.216 [2024-12-09 05:19:14.790117] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 01:24:24.146 05:19:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 01:24:24.146 05:19:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:24.146 05:19:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 01:24:24.146 05:19:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:24.146 05:19:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 01:24:24.146 05:19:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 01:24:24.146 05:19:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 01:24:24.146 05:19:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 01:24:24.146 05:19:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 01:24:24.146 05:19:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:24:24.146 
05:19:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:24:24.146 05:19:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:24:24.146 05:19:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:24:24.146 05:19:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 01:24:24.146 05:19:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:24:24.146 05:19:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:24:24.146 05:19:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:24:24.146 05:19:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:24:24.146 05:19:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:24:24.146 05:19:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:24:24.146 05:19:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:24.146 05:19:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 01:24:24.146 05:19:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:24.146 05:19:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:24:24.146 "name": "raid_bdev1", 01:24:24.146 "uuid": "8764f17f-91e4-4a9e-8cf3-cee9cc903a83", 01:24:24.146 "strip_size_kb": 0, 01:24:24.146 "state": "online", 01:24:24.146 "raid_level": "raid1", 01:24:24.146 "superblock": true, 01:24:24.146 "num_base_bdevs": 3, 01:24:24.146 "num_base_bdevs_discovered": 3, 01:24:24.146 "num_base_bdevs_operational": 3, 01:24:24.146 "base_bdevs_list": [ 01:24:24.146 { 01:24:24.146 "name": "BaseBdev1", 01:24:24.146 "uuid": "0c4b2be9-f243-58c3-a905-7e797d32ac87", 
01:24:24.146 "is_configured": true, 01:24:24.146 "data_offset": 2048, 01:24:24.146 "data_size": 63488 01:24:24.146 }, 01:24:24.146 { 01:24:24.146 "name": "BaseBdev2", 01:24:24.146 "uuid": "329527a8-0d72-58d3-8b1f-962f00dbc6b0", 01:24:24.146 "is_configured": true, 01:24:24.146 "data_offset": 2048, 01:24:24.146 "data_size": 63488 01:24:24.146 }, 01:24:24.146 { 01:24:24.146 "name": "BaseBdev3", 01:24:24.146 "uuid": "4162b626-dffe-58d2-860a-deb2bfe3d182", 01:24:24.146 "is_configured": true, 01:24:24.146 "data_offset": 2048, 01:24:24.146 "data_size": 63488 01:24:24.146 } 01:24:24.146 ] 01:24:24.146 }' 01:24:24.146 05:19:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:24:24.146 05:19:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 01:24:24.709 05:19:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 01:24:24.709 05:19:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:24.709 05:19:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 01:24:24.709 [2024-12-09 05:19:16.245429] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 01:24:24.709 [2024-12-09 05:19:16.245480] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 01:24:24.709 [2024-12-09 05:19:16.249237] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 01:24:24.710 [2024-12-09 05:19:16.249441] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:24:24.710 [2024-12-09 05:19:16.249597] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 01:24:24.710 [2024-12-09 05:19:16.249614] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 01:24:24.710 { 01:24:24.710 "results": [ 01:24:24.710 { 01:24:24.710 "job": "raid_bdev1", 
01:24:24.710 "core_mask": "0x1", 01:24:24.710 "workload": "randrw", 01:24:24.710 "percentage": 50, 01:24:24.710 "status": "finished", 01:24:24.710 "queue_depth": 1, 01:24:24.710 "io_size": 131072, 01:24:24.710 "runtime": 1.452938, 01:24:24.710 "iops": 8968.724061178109, 01:24:24.710 "mibps": 1121.0905076472636, 01:24:24.710 "io_failed": 0, 01:24:24.710 "io_timeout": 0, 01:24:24.710 "avg_latency_us": 107.01916576555207, 01:24:24.710 "min_latency_us": 37.93454545454546, 01:24:24.710 "max_latency_us": 2010.7636363636364 01:24:24.710 } 01:24:24.710 ], 01:24:24.710 "core_count": 1 01:24:24.710 } 01:24:24.710 05:19:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:24.710 05:19:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69094 01:24:24.710 05:19:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 69094 ']' 01:24:24.710 05:19:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 69094 01:24:24.710 05:19:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 01:24:24.710 05:19:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:24:24.710 05:19:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69094 01:24:24.710 05:19:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:24:24.710 05:19:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:24:24.710 killing process with pid 69094 01:24:24.710 05:19:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69094' 01:24:24.710 05:19:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 69094 01:24:24.710 [2024-12-09 05:19:16.285121] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 01:24:24.710 05:19:16 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 69094 01:24:24.968 [2024-12-09 05:19:16.489003] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 01:24:26.343 05:19:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.vjd745qYMZ 01:24:26.343 05:19:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 01:24:26.343 05:19:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 01:24:26.343 05:19:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 01:24:26.343 05:19:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 01:24:26.343 05:19:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 01:24:26.343 05:19:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 01:24:26.343 05:19:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 01:24:26.343 01:24:26.343 real 0m4.904s 01:24:26.343 user 0m6.054s 01:24:26.343 sys 0m0.596s 01:24:26.343 05:19:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 01:24:26.343 ************************************ 01:24:26.343 05:19:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 01:24:26.343 END TEST raid_read_error_test 01:24:26.343 ************************************ 01:24:26.343 05:19:17 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 01:24:26.343 05:19:17 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 01:24:26.343 05:19:17 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 01:24:26.343 05:19:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 01:24:26.343 ************************************ 01:24:26.343 START TEST raid_write_error_test 01:24:26.343 ************************************ 01:24:26.343 05:19:17 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 write 01:24:26.343 05:19:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 01:24:26.343 05:19:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 01:24:26.343 05:19:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 01:24:26.343 05:19:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 01:24:26.343 05:19:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 01:24:26.343 05:19:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 01:24:26.343 05:19:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 01:24:26.343 05:19:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 01:24:26.343 05:19:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 01:24:26.343 05:19:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 01:24:26.343 05:19:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 01:24:26.343 05:19:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 01:24:26.343 05:19:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 01:24:26.343 05:19:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 01:24:26.343 05:19:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 01:24:26.343 05:19:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 01:24:26.343 05:19:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 01:24:26.343 05:19:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 
01:24:26.343 05:19:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 01:24:26.343 05:19:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 01:24:26.343 05:19:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 01:24:26.343 05:19:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 01:24:26.343 05:19:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 01:24:26.343 05:19:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 01:24:26.343 05:19:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.c7PVHIn9Bi 01:24:26.343 05:19:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69242 01:24:26.343 05:19:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 01:24:26.343 05:19:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69242 01:24:26.343 05:19:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 69242 ']' 01:24:26.343 05:19:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:24:26.343 05:19:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 01:24:26.343 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:24:26.343 05:19:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
01:24:26.343 05:19:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 01:24:26.343 05:19:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 01:24:26.343 [2024-12-09 05:19:17.942937] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:24:26.343 [2024-12-09 05:19:17.943124] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69242 ] 01:24:26.602 [2024-12-09 05:19:18.131889] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:24:26.860 [2024-12-09 05:19:18.291416] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:24:27.118 [2024-12-09 05:19:18.506420] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 01:24:27.118 [2024-12-09 05:19:18.506499] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 01:24:27.377 05:19:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:24:27.377 05:19:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 01:24:27.377 05:19:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 01:24:27.377 05:19:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 01:24:27.377 05:19:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:27.377 05:19:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 01:24:27.636 BaseBdev1_malloc 01:24:27.637 05:19:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:27.637 05:19:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 01:24:27.637 05:19:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:27.637 05:19:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 01:24:27.637 true 01:24:27.637 05:19:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:27.637 05:19:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 01:24:27.637 05:19:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:27.637 05:19:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 01:24:27.637 [2024-12-09 05:19:19.016183] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 01:24:27.637 [2024-12-09 05:19:19.016258] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:24:27.637 [2024-12-09 05:19:19.016292] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 01:24:27.637 [2024-12-09 05:19:19.016314] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:24:27.637 [2024-12-09 05:19:19.019607] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:24:27.637 [2024-12-09 05:19:19.019679] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 01:24:27.637 BaseBdev1 01:24:27.637 05:19:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:27.637 05:19:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 01:24:27.637 05:19:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 01:24:27.637 05:19:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:27.637 05:19:19 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 01:24:27.637 BaseBdev2_malloc 01:24:27.637 05:19:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:27.637 05:19:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 01:24:27.637 05:19:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:27.637 05:19:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 01:24:27.637 true 01:24:27.637 05:19:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:27.637 05:19:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 01:24:27.637 05:19:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:27.637 05:19:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 01:24:27.637 [2024-12-09 05:19:19.075715] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 01:24:27.637 [2024-12-09 05:19:19.075793] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:24:27.637 [2024-12-09 05:19:19.075823] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 01:24:27.637 [2024-12-09 05:19:19.075844] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:24:27.637 [2024-12-09 05:19:19.078884] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:24:27.637 [2024-12-09 05:19:19.078943] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 01:24:27.637 BaseBdev2 01:24:27.637 05:19:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:27.637 05:19:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 01:24:27.637 05:19:19 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 01:24:27.637 05:19:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:27.637 05:19:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 01:24:27.637 BaseBdev3_malloc 01:24:27.637 05:19:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:27.637 05:19:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 01:24:27.637 05:19:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:27.637 05:19:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 01:24:27.637 true 01:24:27.637 05:19:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:27.637 05:19:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 01:24:27.637 05:19:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:27.637 05:19:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 01:24:27.637 [2024-12-09 05:19:19.160117] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 01:24:27.637 [2024-12-09 05:19:19.160205] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:24:27.637 [2024-12-09 05:19:19.160236] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 01:24:27.637 [2024-12-09 05:19:19.160258] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:24:27.637 [2024-12-09 05:19:19.163790] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:24:27.637 [2024-12-09 05:19:19.163864] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 01:24:27.637 BaseBdev3 01:24:27.637 05:19:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:27.637 05:19:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 01:24:27.637 05:19:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:27.637 05:19:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 01:24:27.637 [2024-12-09 05:19:19.168242] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 01:24:27.637 [2024-12-09 05:19:19.170886] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 01:24:27.637 [2024-12-09 05:19:19.171053] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 01:24:27.637 [2024-12-09 05:19:19.171403] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 01:24:27.637 [2024-12-09 05:19:19.171436] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 01:24:27.637 [2024-12-09 05:19:19.171781] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 01:24:27.637 [2024-12-09 05:19:19.172051] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 01:24:27.637 [2024-12-09 05:19:19.172091] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 01:24:27.637 [2024-12-09 05:19:19.172342] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:24:27.637 05:19:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:27.637 05:19:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 01:24:27.637 05:19:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=raid_bdev1 01:24:27.637 05:19:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:24:27.637 05:19:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:24:27.637 05:19:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:24:27.637 05:19:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 01:24:27.637 05:19:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:24:27.637 05:19:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:24:27.637 05:19:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:24:27.637 05:19:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:24:27.637 05:19:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:24:27.637 05:19:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:27.637 05:19:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:24:27.637 05:19:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 01:24:27.637 05:19:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:27.637 05:19:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:24:27.637 "name": "raid_bdev1", 01:24:27.637 "uuid": "6042759c-66e3-4406-a7aa-f66074cc8793", 01:24:27.637 "strip_size_kb": 0, 01:24:27.637 "state": "online", 01:24:27.637 "raid_level": "raid1", 01:24:27.637 "superblock": true, 01:24:27.637 "num_base_bdevs": 3, 01:24:27.637 "num_base_bdevs_discovered": 3, 01:24:27.637 "num_base_bdevs_operational": 3, 01:24:27.637 "base_bdevs_list": [ 01:24:27.637 { 01:24:27.637 "name": "BaseBdev1", 01:24:27.637 
"uuid": "fc12e8af-e4fc-5bbb-9f45-fff61e146fb5", 01:24:27.637 "is_configured": true, 01:24:27.637 "data_offset": 2048, 01:24:27.637 "data_size": 63488 01:24:27.637 }, 01:24:27.637 { 01:24:27.637 "name": "BaseBdev2", 01:24:27.637 "uuid": "92533f11-88f5-58d4-a2c1-7fa620ef7489", 01:24:27.637 "is_configured": true, 01:24:27.637 "data_offset": 2048, 01:24:27.637 "data_size": 63488 01:24:27.637 }, 01:24:27.637 { 01:24:27.637 "name": "BaseBdev3", 01:24:27.637 "uuid": "77609e6d-9355-5201-b6d9-b1f826140948", 01:24:27.637 "is_configured": true, 01:24:27.637 "data_offset": 2048, 01:24:27.637 "data_size": 63488 01:24:27.637 } 01:24:27.637 ] 01:24:27.637 }' 01:24:27.637 05:19:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:24:27.637 05:19:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 01:24:28.204 05:19:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 01:24:28.204 05:19:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 01:24:28.466 [2024-12-09 05:19:19.862054] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 01:24:29.408 05:19:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 01:24:29.408 05:19:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:29.408 05:19:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 01:24:29.408 [2024-12-09 05:19:20.732503] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 01:24:29.408 [2024-12-09 05:19:20.732602] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 01:24:29.408 [2024-12-09 05:19:20.732964] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006700 
01:24:29.408 05:19:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:29.408 05:19:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 01:24:29.408 05:19:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 01:24:29.408 05:19:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 01:24:29.408 05:19:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 01:24:29.408 05:19:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 01:24:29.408 05:19:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:24:29.408 05:19:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:24:29.408 05:19:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:24:29.408 05:19:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:24:29.408 05:19:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 01:24:29.408 05:19:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:24:29.408 05:19:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:24:29.408 05:19:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:24:29.408 05:19:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:24:29.408 05:19:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:24:29.408 05:19:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:24:29.408 05:19:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 01:24:29.408 05:19:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 01:24:29.408 05:19:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:29.408 05:19:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:24:29.408 "name": "raid_bdev1", 01:24:29.408 "uuid": "6042759c-66e3-4406-a7aa-f66074cc8793", 01:24:29.408 "strip_size_kb": 0, 01:24:29.408 "state": "online", 01:24:29.408 "raid_level": "raid1", 01:24:29.408 "superblock": true, 01:24:29.408 "num_base_bdevs": 3, 01:24:29.408 "num_base_bdevs_discovered": 2, 01:24:29.408 "num_base_bdevs_operational": 2, 01:24:29.408 "base_bdevs_list": [ 01:24:29.408 { 01:24:29.408 "name": null, 01:24:29.408 "uuid": "00000000-0000-0000-0000-000000000000", 01:24:29.408 "is_configured": false, 01:24:29.408 "data_offset": 0, 01:24:29.408 "data_size": 63488 01:24:29.408 }, 01:24:29.408 { 01:24:29.408 "name": "BaseBdev2", 01:24:29.408 "uuid": "92533f11-88f5-58d4-a2c1-7fa620ef7489", 01:24:29.408 "is_configured": true, 01:24:29.408 "data_offset": 2048, 01:24:29.408 "data_size": 63488 01:24:29.408 }, 01:24:29.408 { 01:24:29.408 "name": "BaseBdev3", 01:24:29.408 "uuid": "77609e6d-9355-5201-b6d9-b1f826140948", 01:24:29.408 "is_configured": true, 01:24:29.408 "data_offset": 2048, 01:24:29.408 "data_size": 63488 01:24:29.408 } 01:24:29.408 ] 01:24:29.408 }' 01:24:29.408 05:19:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:24:29.408 05:19:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 01:24:29.974 05:19:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 01:24:29.974 05:19:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:29.974 05:19:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 01:24:29.974 [2024-12-09 05:19:21.309927] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 01:24:29.974 [2024-12-09 05:19:21.309977] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 01:24:29.974 [2024-12-09 05:19:21.313637] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 01:24:29.974 [2024-12-09 05:19:21.313724] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:24:29.975 [2024-12-09 05:19:21.313919] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 01:24:29.975 [2024-12-09 05:19:21.313952] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 01:24:29.975 { 01:24:29.975 "results": [ 01:24:29.975 { 01:24:29.975 "job": "raid_bdev1", 01:24:29.975 "core_mask": "0x1", 01:24:29.975 "workload": "randrw", 01:24:29.975 "percentage": 50, 01:24:29.975 "status": "finished", 01:24:29.975 "queue_depth": 1, 01:24:29.975 "io_size": 131072, 01:24:29.975 "runtime": 1.445285, 01:24:29.975 "iops": 9076.410534946395, 01:24:29.975 "mibps": 1134.5513168682994, 01:24:29.975 "io_failed": 0, 01:24:29.975 "io_timeout": 0, 01:24:29.975 "avg_latency_us": 104.74654839290912, 01:24:29.975 "min_latency_us": 40.261818181818185, 01:24:29.975 "max_latency_us": 2174.6036363636363 01:24:29.975 } 01:24:29.975 ], 01:24:29.975 "core_count": 1 01:24:29.975 } 01:24:29.975 05:19:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:29.975 05:19:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69242 01:24:29.975 05:19:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 69242 ']' 01:24:29.975 05:19:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 69242 01:24:29.975 05:19:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 01:24:29.975 05:19:21 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:24:29.975 05:19:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69242 01:24:29.975 05:19:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:24:29.975 05:19:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:24:29.975 killing process with pid 69242 01:24:29.975 05:19:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69242' 01:24:29.975 05:19:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 69242 01:24:29.975 [2024-12-09 05:19:21.353517] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 01:24:29.975 05:19:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 69242 01:24:29.975 [2024-12-09 05:19:21.574993] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 01:24:31.347 05:19:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.c7PVHIn9Bi 01:24:31.347 05:19:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 01:24:31.347 05:19:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 01:24:31.347 05:19:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 01:24:31.347 05:19:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 01:24:31.347 05:19:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 01:24:31.347 05:19:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 01:24:31.347 05:19:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 01:24:31.347 01:24:31.347 real 0m5.084s 01:24:31.347 user 0m6.264s 01:24:31.347 sys 0m0.669s 01:24:31.347 05:19:22 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 01:24:31.347 05:19:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 01:24:31.347 ************************************ 01:24:31.347 END TEST raid_write_error_test 01:24:31.347 ************************************ 01:24:31.347 05:19:22 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 01:24:31.347 05:19:22 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 01:24:31.347 05:19:22 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 01:24:31.347 05:19:22 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 01:24:31.347 05:19:22 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 01:24:31.347 05:19:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 01:24:31.347 ************************************ 01:24:31.347 START TEST raid_state_function_test 01:24:31.347 ************************************ 01:24:31.347 05:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 false 01:24:31.347 05:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 01:24:31.347 05:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 01:24:31.347 05:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 01:24:31.347 05:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 01:24:31.347 05:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 01:24:31.347 05:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 01:24:31.347 05:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 01:24:31.347 05:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- 
# (( i++ )) 01:24:31.347 05:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 01:24:31.347 05:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 01:24:31.347 05:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 01:24:31.347 05:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 01:24:31.347 05:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 01:24:31.347 05:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 01:24:31.347 05:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 01:24:31.347 05:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 01:24:31.348 05:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 01:24:31.348 05:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 01:24:31.348 05:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 01:24:31.348 05:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 01:24:31.348 05:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 01:24:31.348 05:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 01:24:31.348 05:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 01:24:31.348 05:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 01:24:31.348 05:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 01:24:31.348 05:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 01:24:31.348 
05:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 01:24:31.348 05:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 01:24:31.348 05:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 01:24:31.348 05:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=69391 01:24:31.348 05:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 01:24:31.348 Process raid pid: 69391 01:24:31.348 05:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 69391' 01:24:31.348 05:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 69391 01:24:31.348 05:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 69391 ']' 01:24:31.348 05:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:24:31.348 05:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 01:24:31.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:24:31.348 05:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:24:31.348 05:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 01:24:31.348 05:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:24:31.606 [2024-12-09 05:19:23.049108] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
01:24:31.606 [2024-12-09 05:19:23.049304] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:24:31.874 [2024-12-09 05:19:23.237939] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:24:31.874 [2024-12-09 05:19:23.392688] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:24:32.131 [2024-12-09 05:19:23.612224] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 01:24:32.131 [2024-12-09 05:19:23.612314] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 01:24:32.696 05:19:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:24:32.696 05:19:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 01:24:32.696 05:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 01:24:32.696 05:19:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:32.696 05:19:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:24:32.696 [2024-12-09 05:19:24.058817] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 01:24:32.696 [2024-12-09 05:19:24.058928] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 01:24:32.696 [2024-12-09 05:19:24.058965] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 01:24:32.696 [2024-12-09 05:19:24.058984] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 01:24:32.696 [2024-12-09 05:19:24.058997] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 01:24:32.696 [2024-12-09 05:19:24.059014] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 01:24:32.696 [2024-12-09 05:19:24.059043] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 01:24:32.696 [2024-12-09 05:19:24.059062] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 01:24:32.696 05:19:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:32.696 05:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 01:24:32.696 05:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:24:32.696 05:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:24:32.696 05:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 01:24:32.696 05:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:24:32.696 05:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 01:24:32.696 05:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:24:32.696 05:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:24:32.696 05:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:24:32.696 05:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:24:32.696 05:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:24:32.696 05:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:24:32.696 05:19:24 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 01:24:32.696 05:19:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:24:32.696 05:19:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:32.696 05:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:24:32.696 "name": "Existed_Raid", 01:24:32.696 "uuid": "00000000-0000-0000-0000-000000000000", 01:24:32.696 "strip_size_kb": 64, 01:24:32.696 "state": "configuring", 01:24:32.696 "raid_level": "raid0", 01:24:32.696 "superblock": false, 01:24:32.696 "num_base_bdevs": 4, 01:24:32.696 "num_base_bdevs_discovered": 0, 01:24:32.696 "num_base_bdevs_operational": 4, 01:24:32.696 "base_bdevs_list": [ 01:24:32.696 { 01:24:32.696 "name": "BaseBdev1", 01:24:32.696 "uuid": "00000000-0000-0000-0000-000000000000", 01:24:32.696 "is_configured": false, 01:24:32.696 "data_offset": 0, 01:24:32.696 "data_size": 0 01:24:32.696 }, 01:24:32.696 { 01:24:32.696 "name": "BaseBdev2", 01:24:32.696 "uuid": "00000000-0000-0000-0000-000000000000", 01:24:32.696 "is_configured": false, 01:24:32.696 "data_offset": 0, 01:24:32.696 "data_size": 0 01:24:32.696 }, 01:24:32.696 { 01:24:32.696 "name": "BaseBdev3", 01:24:32.696 "uuid": "00000000-0000-0000-0000-000000000000", 01:24:32.696 "is_configured": false, 01:24:32.696 "data_offset": 0, 01:24:32.696 "data_size": 0 01:24:32.696 }, 01:24:32.696 { 01:24:32.696 "name": "BaseBdev4", 01:24:32.696 "uuid": "00000000-0000-0000-0000-000000000000", 01:24:32.696 "is_configured": false, 01:24:32.696 "data_offset": 0, 01:24:32.696 "data_size": 0 01:24:32.696 } 01:24:32.696 ] 01:24:32.696 }' 01:24:32.696 05:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:24:32.696 05:19:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:24:33.261 05:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 01:24:33.261 05:19:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:33.261 05:19:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:24:33.261 [2024-12-09 05:19:24.602923] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 01:24:33.261 [2024-12-09 05:19:24.603011] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 01:24:33.261 05:19:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:33.261 05:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 01:24:33.261 05:19:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:33.261 05:19:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:24:33.261 [2024-12-09 05:19:24.610917] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 01:24:33.261 [2024-12-09 05:19:24.611028] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 01:24:33.261 [2024-12-09 05:19:24.611064] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 01:24:33.261 [2024-12-09 05:19:24.611084] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 01:24:33.261 [2024-12-09 05:19:24.611097] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 01:24:33.261 [2024-12-09 05:19:24.611115] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 01:24:33.261 [2024-12-09 05:19:24.611127] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 01:24:33.261 [2024-12-09 05:19:24.611144] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 01:24:33.261 05:19:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:33.261 05:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 01:24:33.261 05:19:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:33.261 05:19:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:24:33.261 [2024-12-09 05:19:24.660608] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 01:24:33.261 BaseBdev1 01:24:33.261 05:19:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:33.261 05:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 01:24:33.261 05:19:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 01:24:33.261 05:19:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 01:24:33.261 05:19:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 01:24:33.261 05:19:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 01:24:33.261 05:19:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 01:24:33.261 05:19:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 01:24:33.261 05:19:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:33.261 05:19:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:24:33.261 05:19:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:33.261 05:19:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 01:24:33.261 05:19:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:33.261 05:19:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:24:33.261 [ 01:24:33.261 { 01:24:33.261 "name": "BaseBdev1", 01:24:33.261 "aliases": [ 01:24:33.261 "71795d47-0967-4c31-aac3-4660319a90ce" 01:24:33.261 ], 01:24:33.261 "product_name": "Malloc disk", 01:24:33.261 "block_size": 512, 01:24:33.261 "num_blocks": 65536, 01:24:33.261 "uuid": "71795d47-0967-4c31-aac3-4660319a90ce", 01:24:33.261 "assigned_rate_limits": { 01:24:33.261 "rw_ios_per_sec": 0, 01:24:33.261 "rw_mbytes_per_sec": 0, 01:24:33.261 "r_mbytes_per_sec": 0, 01:24:33.261 "w_mbytes_per_sec": 0 01:24:33.261 }, 01:24:33.261 "claimed": true, 01:24:33.261 "claim_type": "exclusive_write", 01:24:33.261 "zoned": false, 01:24:33.261 "supported_io_types": { 01:24:33.261 "read": true, 01:24:33.261 "write": true, 01:24:33.261 "unmap": true, 01:24:33.261 "flush": true, 01:24:33.261 "reset": true, 01:24:33.261 "nvme_admin": false, 01:24:33.261 "nvme_io": false, 01:24:33.261 "nvme_io_md": false, 01:24:33.261 "write_zeroes": true, 01:24:33.261 "zcopy": true, 01:24:33.261 "get_zone_info": false, 01:24:33.261 "zone_management": false, 01:24:33.261 "zone_append": false, 01:24:33.261 "compare": false, 01:24:33.261 "compare_and_write": false, 01:24:33.261 "abort": true, 01:24:33.261 "seek_hole": false, 01:24:33.261 "seek_data": false, 01:24:33.261 "copy": true, 01:24:33.261 "nvme_iov_md": false 01:24:33.261 }, 01:24:33.261 "memory_domains": [ 01:24:33.261 { 01:24:33.261 "dma_device_id": "system", 01:24:33.261 "dma_device_type": 1 01:24:33.261 }, 01:24:33.261 { 01:24:33.261 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:24:33.261 "dma_device_type": 2 01:24:33.261 } 01:24:33.261 ], 01:24:33.261 "driver_specific": {} 01:24:33.261 } 01:24:33.261 ] 01:24:33.261 05:19:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 01:24:33.261 05:19:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 01:24:33.261 05:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 01:24:33.261 05:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:24:33.261 05:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:24:33.261 05:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 01:24:33.261 05:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:24:33.261 05:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 01:24:33.261 05:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:24:33.261 05:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:24:33.261 05:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:24:33.261 05:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:24:33.261 05:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:24:33.261 05:19:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:33.261 05:19:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:24:33.261 05:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:24:33.261 05:19:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:33.261 05:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:24:33.261 "name": "Existed_Raid", 
01:24:33.261 "uuid": "00000000-0000-0000-0000-000000000000", 01:24:33.261 "strip_size_kb": 64, 01:24:33.261 "state": "configuring", 01:24:33.261 "raid_level": "raid0", 01:24:33.261 "superblock": false, 01:24:33.261 "num_base_bdevs": 4, 01:24:33.261 "num_base_bdevs_discovered": 1, 01:24:33.261 "num_base_bdevs_operational": 4, 01:24:33.261 "base_bdevs_list": [ 01:24:33.261 { 01:24:33.261 "name": "BaseBdev1", 01:24:33.261 "uuid": "71795d47-0967-4c31-aac3-4660319a90ce", 01:24:33.261 "is_configured": true, 01:24:33.261 "data_offset": 0, 01:24:33.261 "data_size": 65536 01:24:33.261 }, 01:24:33.261 { 01:24:33.261 "name": "BaseBdev2", 01:24:33.261 "uuid": "00000000-0000-0000-0000-000000000000", 01:24:33.261 "is_configured": false, 01:24:33.261 "data_offset": 0, 01:24:33.261 "data_size": 0 01:24:33.261 }, 01:24:33.261 { 01:24:33.261 "name": "BaseBdev3", 01:24:33.261 "uuid": "00000000-0000-0000-0000-000000000000", 01:24:33.261 "is_configured": false, 01:24:33.261 "data_offset": 0, 01:24:33.261 "data_size": 0 01:24:33.261 }, 01:24:33.261 { 01:24:33.261 "name": "BaseBdev4", 01:24:33.261 "uuid": "00000000-0000-0000-0000-000000000000", 01:24:33.261 "is_configured": false, 01:24:33.261 "data_offset": 0, 01:24:33.261 "data_size": 0 01:24:33.261 } 01:24:33.261 ] 01:24:33.261 }' 01:24:33.261 05:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:24:33.261 05:19:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:24:33.826 05:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 01:24:33.826 05:19:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:33.826 05:19:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:24:33.826 [2024-12-09 05:19:25.220885] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 01:24:33.826 [2024-12-09 05:19:25.220955] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 01:24:33.826 05:19:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:33.826 05:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 01:24:33.826 05:19:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:33.826 05:19:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:24:33.826 [2024-12-09 05:19:25.228967] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 01:24:33.826 [2024-12-09 05:19:25.231563] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 01:24:33.826 [2024-12-09 05:19:25.231628] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 01:24:33.826 [2024-12-09 05:19:25.231649] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 01:24:33.826 [2024-12-09 05:19:25.231671] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 01:24:33.826 [2024-12-09 05:19:25.231684] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 01:24:33.826 [2024-12-09 05:19:25.231703] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 01:24:33.826 05:19:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:33.826 05:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 01:24:33.826 05:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 01:24:33.826 05:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 
01:24:33.826 05:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:24:33.826 05:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:24:33.826 05:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 01:24:33.826 05:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:24:33.826 05:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 01:24:33.826 05:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:24:33.826 05:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:24:33.826 05:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:24:33.826 05:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:24:33.826 05:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:24:33.826 05:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:24:33.826 05:19:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:33.826 05:19:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:24:33.826 05:19:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:33.826 05:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:24:33.826 "name": "Existed_Raid", 01:24:33.826 "uuid": "00000000-0000-0000-0000-000000000000", 01:24:33.826 "strip_size_kb": 64, 01:24:33.826 "state": "configuring", 01:24:33.826 "raid_level": "raid0", 01:24:33.826 "superblock": false, 01:24:33.826 "num_base_bdevs": 4, 01:24:33.826 
"num_base_bdevs_discovered": 1, 01:24:33.826 "num_base_bdevs_operational": 4, 01:24:33.826 "base_bdevs_list": [ 01:24:33.826 { 01:24:33.826 "name": "BaseBdev1", 01:24:33.826 "uuid": "71795d47-0967-4c31-aac3-4660319a90ce", 01:24:33.826 "is_configured": true, 01:24:33.826 "data_offset": 0, 01:24:33.826 "data_size": 65536 01:24:33.826 }, 01:24:33.826 { 01:24:33.826 "name": "BaseBdev2", 01:24:33.826 "uuid": "00000000-0000-0000-0000-000000000000", 01:24:33.826 "is_configured": false, 01:24:33.826 "data_offset": 0, 01:24:33.826 "data_size": 0 01:24:33.826 }, 01:24:33.826 { 01:24:33.826 "name": "BaseBdev3", 01:24:33.826 "uuid": "00000000-0000-0000-0000-000000000000", 01:24:33.826 "is_configured": false, 01:24:33.826 "data_offset": 0, 01:24:33.826 "data_size": 0 01:24:33.826 }, 01:24:33.826 { 01:24:33.826 "name": "BaseBdev4", 01:24:33.826 "uuid": "00000000-0000-0000-0000-000000000000", 01:24:33.826 "is_configured": false, 01:24:33.826 "data_offset": 0, 01:24:33.826 "data_size": 0 01:24:33.826 } 01:24:33.826 ] 01:24:33.826 }' 01:24:33.826 05:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:24:33.826 05:19:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:24:34.392 05:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 01:24:34.392 05:19:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:34.392 05:19:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:24:34.392 [2024-12-09 05:19:25.807211] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 01:24:34.392 BaseBdev2 01:24:34.392 05:19:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:34.392 05:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 01:24:34.392 05:19:25 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 01:24:34.392 05:19:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 01:24:34.392 05:19:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 01:24:34.392 05:19:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 01:24:34.392 05:19:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 01:24:34.392 05:19:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 01:24:34.392 05:19:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:34.392 05:19:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:24:34.392 05:19:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:34.392 05:19:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 01:24:34.392 05:19:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:34.392 05:19:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:24:34.392 [ 01:24:34.392 { 01:24:34.392 "name": "BaseBdev2", 01:24:34.392 "aliases": [ 01:24:34.392 "9be575a8-ce23-44e1-8234-ffb7a2de5fe0" 01:24:34.392 ], 01:24:34.392 "product_name": "Malloc disk", 01:24:34.392 "block_size": 512, 01:24:34.392 "num_blocks": 65536, 01:24:34.392 "uuid": "9be575a8-ce23-44e1-8234-ffb7a2de5fe0", 01:24:34.392 "assigned_rate_limits": { 01:24:34.392 "rw_ios_per_sec": 0, 01:24:34.392 "rw_mbytes_per_sec": 0, 01:24:34.392 "r_mbytes_per_sec": 0, 01:24:34.392 "w_mbytes_per_sec": 0 01:24:34.392 }, 01:24:34.392 "claimed": true, 01:24:34.392 "claim_type": "exclusive_write", 01:24:34.392 "zoned": false, 01:24:34.392 "supported_io_types": { 
01:24:34.392 "read": true, 01:24:34.392 "write": true, 01:24:34.392 "unmap": true, 01:24:34.392 "flush": true, 01:24:34.392 "reset": true, 01:24:34.392 "nvme_admin": false, 01:24:34.392 "nvme_io": false, 01:24:34.392 "nvme_io_md": false, 01:24:34.392 "write_zeroes": true, 01:24:34.392 "zcopy": true, 01:24:34.392 "get_zone_info": false, 01:24:34.392 "zone_management": false, 01:24:34.392 "zone_append": false, 01:24:34.392 "compare": false, 01:24:34.392 "compare_and_write": false, 01:24:34.392 "abort": true, 01:24:34.392 "seek_hole": false, 01:24:34.392 "seek_data": false, 01:24:34.392 "copy": true, 01:24:34.392 "nvme_iov_md": false 01:24:34.392 }, 01:24:34.392 "memory_domains": [ 01:24:34.392 { 01:24:34.392 "dma_device_id": "system", 01:24:34.392 "dma_device_type": 1 01:24:34.392 }, 01:24:34.392 { 01:24:34.392 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:24:34.392 "dma_device_type": 2 01:24:34.392 } 01:24:34.392 ], 01:24:34.392 "driver_specific": {} 01:24:34.392 } 01:24:34.392 ] 01:24:34.392 05:19:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:34.392 05:19:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 01:24:34.392 05:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 01:24:34.392 05:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 01:24:34.392 05:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 01:24:34.392 05:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:24:34.392 05:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:24:34.392 05:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 01:24:34.392 05:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 01:24:34.392 05:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 01:24:34.392 05:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:24:34.392 05:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:24:34.392 05:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:24:34.392 05:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:24:34.392 05:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:24:34.392 05:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:24:34.392 05:19:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:34.392 05:19:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:24:34.392 05:19:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:34.392 05:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:24:34.392 "name": "Existed_Raid", 01:24:34.392 "uuid": "00000000-0000-0000-0000-000000000000", 01:24:34.392 "strip_size_kb": 64, 01:24:34.392 "state": "configuring", 01:24:34.392 "raid_level": "raid0", 01:24:34.392 "superblock": false, 01:24:34.392 "num_base_bdevs": 4, 01:24:34.392 "num_base_bdevs_discovered": 2, 01:24:34.392 "num_base_bdevs_operational": 4, 01:24:34.392 "base_bdevs_list": [ 01:24:34.392 { 01:24:34.392 "name": "BaseBdev1", 01:24:34.392 "uuid": "71795d47-0967-4c31-aac3-4660319a90ce", 01:24:34.392 "is_configured": true, 01:24:34.392 "data_offset": 0, 01:24:34.392 "data_size": 65536 01:24:34.392 }, 01:24:34.392 { 01:24:34.392 "name": "BaseBdev2", 01:24:34.392 "uuid": "9be575a8-ce23-44e1-8234-ffb7a2de5fe0", 01:24:34.392 
"is_configured": true, 01:24:34.392 "data_offset": 0, 01:24:34.392 "data_size": 65536 01:24:34.392 }, 01:24:34.392 { 01:24:34.392 "name": "BaseBdev3", 01:24:34.392 "uuid": "00000000-0000-0000-0000-000000000000", 01:24:34.392 "is_configured": false, 01:24:34.392 "data_offset": 0, 01:24:34.392 "data_size": 0 01:24:34.393 }, 01:24:34.393 { 01:24:34.393 "name": "BaseBdev4", 01:24:34.393 "uuid": "00000000-0000-0000-0000-000000000000", 01:24:34.393 "is_configured": false, 01:24:34.393 "data_offset": 0, 01:24:34.393 "data_size": 0 01:24:34.393 } 01:24:34.393 ] 01:24:34.393 }' 01:24:34.393 05:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:24:34.393 05:19:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:24:34.959 05:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 01:24:34.959 05:19:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:34.959 05:19:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:24:34.959 [2024-12-09 05:19:26.397241] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 01:24:34.959 BaseBdev3 01:24:34.959 05:19:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:34.959 05:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 01:24:34.959 05:19:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 01:24:34.959 05:19:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 01:24:34.959 05:19:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 01:24:34.959 05:19:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 01:24:34.959 05:19:26 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 01:24:34.959 05:19:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 01:24:34.959 05:19:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:34.959 05:19:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:24:34.959 05:19:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:34.959 05:19:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 01:24:34.959 05:19:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:34.959 05:19:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:24:34.959 [ 01:24:34.959 { 01:24:34.959 "name": "BaseBdev3", 01:24:34.959 "aliases": [ 01:24:34.959 "7d871346-4a63-4739-a0e4-7c7d31e388a8" 01:24:34.959 ], 01:24:34.959 "product_name": "Malloc disk", 01:24:34.959 "block_size": 512, 01:24:34.959 "num_blocks": 65536, 01:24:34.959 "uuid": "7d871346-4a63-4739-a0e4-7c7d31e388a8", 01:24:34.959 "assigned_rate_limits": { 01:24:34.959 "rw_ios_per_sec": 0, 01:24:34.959 "rw_mbytes_per_sec": 0, 01:24:34.959 "r_mbytes_per_sec": 0, 01:24:34.959 "w_mbytes_per_sec": 0 01:24:34.959 }, 01:24:34.959 "claimed": true, 01:24:34.959 "claim_type": "exclusive_write", 01:24:34.959 "zoned": false, 01:24:34.959 "supported_io_types": { 01:24:34.959 "read": true, 01:24:34.959 "write": true, 01:24:34.959 "unmap": true, 01:24:34.959 "flush": true, 01:24:34.959 "reset": true, 01:24:34.959 "nvme_admin": false, 01:24:34.959 "nvme_io": false, 01:24:34.959 "nvme_io_md": false, 01:24:34.959 "write_zeroes": true, 01:24:34.959 "zcopy": true, 01:24:34.959 "get_zone_info": false, 01:24:34.959 "zone_management": false, 01:24:34.959 "zone_append": false, 01:24:34.959 "compare": false, 01:24:34.959 "compare_and_write": false, 
01:24:34.959 "abort": true, 01:24:34.959 "seek_hole": false, 01:24:34.959 "seek_data": false, 01:24:34.959 "copy": true, 01:24:34.959 "nvme_iov_md": false 01:24:34.959 }, 01:24:34.959 "memory_domains": [ 01:24:34.959 { 01:24:34.959 "dma_device_id": "system", 01:24:34.959 "dma_device_type": 1 01:24:34.959 }, 01:24:34.959 { 01:24:34.959 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:24:34.959 "dma_device_type": 2 01:24:34.959 } 01:24:34.959 ], 01:24:34.959 "driver_specific": {} 01:24:34.959 } 01:24:34.959 ] 01:24:34.959 05:19:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:34.959 05:19:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 01:24:34.959 05:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 01:24:34.959 05:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 01:24:34.959 05:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 01:24:34.959 05:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:24:34.959 05:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:24:34.959 05:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 01:24:34.959 05:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:24:34.959 05:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 01:24:34.960 05:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:24:34.960 05:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:24:34.960 05:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
01:24:34.960 05:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:24:34.960 05:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:24:34.960 05:19:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:34.960 05:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:24:34.960 05:19:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:24:34.960 05:19:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:34.960 05:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:24:34.960 "name": "Existed_Raid", 01:24:34.960 "uuid": "00000000-0000-0000-0000-000000000000", 01:24:34.960 "strip_size_kb": 64, 01:24:34.960 "state": "configuring", 01:24:34.960 "raid_level": "raid0", 01:24:34.960 "superblock": false, 01:24:34.960 "num_base_bdevs": 4, 01:24:34.960 "num_base_bdevs_discovered": 3, 01:24:34.960 "num_base_bdevs_operational": 4, 01:24:34.960 "base_bdevs_list": [ 01:24:34.960 { 01:24:34.960 "name": "BaseBdev1", 01:24:34.960 "uuid": "71795d47-0967-4c31-aac3-4660319a90ce", 01:24:34.960 "is_configured": true, 01:24:34.960 "data_offset": 0, 01:24:34.960 "data_size": 65536 01:24:34.960 }, 01:24:34.960 { 01:24:34.960 "name": "BaseBdev2", 01:24:34.960 "uuid": "9be575a8-ce23-44e1-8234-ffb7a2de5fe0", 01:24:34.960 "is_configured": true, 01:24:34.960 "data_offset": 0, 01:24:34.960 "data_size": 65536 01:24:34.960 }, 01:24:34.960 { 01:24:34.960 "name": "BaseBdev3", 01:24:34.960 "uuid": "7d871346-4a63-4739-a0e4-7c7d31e388a8", 01:24:34.960 "is_configured": true, 01:24:34.960 "data_offset": 0, 01:24:34.960 "data_size": 65536 01:24:34.960 }, 01:24:34.960 { 01:24:34.960 "name": "BaseBdev4", 01:24:34.960 "uuid": "00000000-0000-0000-0000-000000000000", 01:24:34.960 "is_configured": false, 
01:24:34.960 "data_offset": 0, 01:24:34.960 "data_size": 0 01:24:34.960 } 01:24:34.960 ] 01:24:34.960 }' 01:24:34.960 05:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:24:34.960 05:19:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:24:35.542 05:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 01:24:35.542 05:19:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:35.542 05:19:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:24:35.542 [2024-12-09 05:19:26.986091] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 01:24:35.542 [2024-12-09 05:19:26.986169] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 01:24:35.542 [2024-12-09 05:19:26.986186] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 01:24:35.542 [2024-12-09 05:19:26.986630] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 01:24:35.542 [2024-12-09 05:19:26.986899] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 01:24:35.542 [2024-12-09 05:19:26.986933] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 01:24:35.542 [2024-12-09 05:19:26.987275] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:24:35.542 BaseBdev4 01:24:35.542 05:19:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:35.542 05:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 01:24:35.542 05:19:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 01:24:35.542 05:19:26 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 01:24:35.542 05:19:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 01:24:35.542 05:19:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 01:24:35.542 05:19:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 01:24:35.542 05:19:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 01:24:35.542 05:19:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:35.542 05:19:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:24:35.542 05:19:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:35.542 05:19:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 01:24:35.542 05:19:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:35.542 05:19:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:24:35.542 [ 01:24:35.542 { 01:24:35.542 "name": "BaseBdev4", 01:24:35.542 "aliases": [ 01:24:35.542 "b696f240-804f-455b-92e3-566451f24880" 01:24:35.542 ], 01:24:35.542 "product_name": "Malloc disk", 01:24:35.542 "block_size": 512, 01:24:35.542 "num_blocks": 65536, 01:24:35.542 "uuid": "b696f240-804f-455b-92e3-566451f24880", 01:24:35.542 "assigned_rate_limits": { 01:24:35.542 "rw_ios_per_sec": 0, 01:24:35.542 "rw_mbytes_per_sec": 0, 01:24:35.542 "r_mbytes_per_sec": 0, 01:24:35.542 "w_mbytes_per_sec": 0 01:24:35.542 }, 01:24:35.542 "claimed": true, 01:24:35.542 "claim_type": "exclusive_write", 01:24:35.542 "zoned": false, 01:24:35.542 "supported_io_types": { 01:24:35.542 "read": true, 01:24:35.542 "write": true, 01:24:35.542 "unmap": true, 01:24:35.542 "flush": true, 01:24:35.542 "reset": true, 01:24:35.542 
"nvme_admin": false, 01:24:35.542 "nvme_io": false, 01:24:35.542 "nvme_io_md": false, 01:24:35.542 "write_zeroes": true, 01:24:35.542 "zcopy": true, 01:24:35.542 "get_zone_info": false, 01:24:35.542 "zone_management": false, 01:24:35.542 "zone_append": false, 01:24:35.542 "compare": false, 01:24:35.542 "compare_and_write": false, 01:24:35.542 "abort": true, 01:24:35.542 "seek_hole": false, 01:24:35.542 "seek_data": false, 01:24:35.542 "copy": true, 01:24:35.542 "nvme_iov_md": false 01:24:35.542 }, 01:24:35.542 "memory_domains": [ 01:24:35.543 { 01:24:35.543 "dma_device_id": "system", 01:24:35.543 "dma_device_type": 1 01:24:35.543 }, 01:24:35.543 { 01:24:35.543 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:24:35.543 "dma_device_type": 2 01:24:35.543 } 01:24:35.543 ], 01:24:35.543 "driver_specific": {} 01:24:35.543 } 01:24:35.543 ] 01:24:35.543 05:19:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:35.543 05:19:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 01:24:35.543 05:19:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 01:24:35.543 05:19:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 01:24:35.543 05:19:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 01:24:35.543 05:19:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:24:35.543 05:19:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:24:35.543 05:19:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 01:24:35.543 05:19:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:24:35.543 05:19:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 01:24:35.543 05:19:27 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:24:35.543 05:19:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:24:35.543 05:19:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:24:35.543 05:19:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:24:35.543 05:19:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:24:35.543 05:19:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:35.543 05:19:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:24:35.543 05:19:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:24:35.543 05:19:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:35.543 05:19:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:24:35.543 "name": "Existed_Raid", 01:24:35.543 "uuid": "2f8bda0f-37a7-4ea1-a547-b328474f41eb", 01:24:35.543 "strip_size_kb": 64, 01:24:35.543 "state": "online", 01:24:35.543 "raid_level": "raid0", 01:24:35.543 "superblock": false, 01:24:35.543 "num_base_bdevs": 4, 01:24:35.543 "num_base_bdevs_discovered": 4, 01:24:35.543 "num_base_bdevs_operational": 4, 01:24:35.543 "base_bdevs_list": [ 01:24:35.543 { 01:24:35.543 "name": "BaseBdev1", 01:24:35.543 "uuid": "71795d47-0967-4c31-aac3-4660319a90ce", 01:24:35.543 "is_configured": true, 01:24:35.543 "data_offset": 0, 01:24:35.543 "data_size": 65536 01:24:35.543 }, 01:24:35.543 { 01:24:35.543 "name": "BaseBdev2", 01:24:35.543 "uuid": "9be575a8-ce23-44e1-8234-ffb7a2de5fe0", 01:24:35.543 "is_configured": true, 01:24:35.543 "data_offset": 0, 01:24:35.543 "data_size": 65536 01:24:35.543 }, 01:24:35.543 { 01:24:35.543 "name": "BaseBdev3", 01:24:35.543 "uuid": 
"7d871346-4a63-4739-a0e4-7c7d31e388a8", 01:24:35.543 "is_configured": true, 01:24:35.543 "data_offset": 0, 01:24:35.543 "data_size": 65536 01:24:35.543 }, 01:24:35.543 { 01:24:35.543 "name": "BaseBdev4", 01:24:35.543 "uuid": "b696f240-804f-455b-92e3-566451f24880", 01:24:35.543 "is_configured": true, 01:24:35.543 "data_offset": 0, 01:24:35.543 "data_size": 65536 01:24:35.543 } 01:24:35.543 ] 01:24:35.543 }' 01:24:35.543 05:19:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:24:35.543 05:19:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:24:36.110 05:19:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 01:24:36.110 05:19:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 01:24:36.110 05:19:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 01:24:36.110 05:19:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 01:24:36.110 05:19:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 01:24:36.110 05:19:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 01:24:36.110 05:19:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 01:24:36.110 05:19:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 01:24:36.110 05:19:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:36.110 05:19:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:24:36.110 [2024-12-09 05:19:27.554811] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 01:24:36.110 05:19:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:36.110 05:19:27 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 01:24:36.110 "name": "Existed_Raid", 01:24:36.110 "aliases": [ 01:24:36.110 "2f8bda0f-37a7-4ea1-a547-b328474f41eb" 01:24:36.110 ], 01:24:36.110 "product_name": "Raid Volume", 01:24:36.110 "block_size": 512, 01:24:36.110 "num_blocks": 262144, 01:24:36.110 "uuid": "2f8bda0f-37a7-4ea1-a547-b328474f41eb", 01:24:36.110 "assigned_rate_limits": { 01:24:36.110 "rw_ios_per_sec": 0, 01:24:36.110 "rw_mbytes_per_sec": 0, 01:24:36.110 "r_mbytes_per_sec": 0, 01:24:36.110 "w_mbytes_per_sec": 0 01:24:36.110 }, 01:24:36.110 "claimed": false, 01:24:36.110 "zoned": false, 01:24:36.110 "supported_io_types": { 01:24:36.110 "read": true, 01:24:36.110 "write": true, 01:24:36.110 "unmap": true, 01:24:36.110 "flush": true, 01:24:36.110 "reset": true, 01:24:36.110 "nvme_admin": false, 01:24:36.110 "nvme_io": false, 01:24:36.110 "nvme_io_md": false, 01:24:36.110 "write_zeroes": true, 01:24:36.110 "zcopy": false, 01:24:36.110 "get_zone_info": false, 01:24:36.110 "zone_management": false, 01:24:36.110 "zone_append": false, 01:24:36.110 "compare": false, 01:24:36.110 "compare_and_write": false, 01:24:36.110 "abort": false, 01:24:36.110 "seek_hole": false, 01:24:36.110 "seek_data": false, 01:24:36.110 "copy": false, 01:24:36.110 "nvme_iov_md": false 01:24:36.110 }, 01:24:36.110 "memory_domains": [ 01:24:36.110 { 01:24:36.110 "dma_device_id": "system", 01:24:36.110 "dma_device_type": 1 01:24:36.110 }, 01:24:36.110 { 01:24:36.110 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:24:36.110 "dma_device_type": 2 01:24:36.110 }, 01:24:36.110 { 01:24:36.110 "dma_device_id": "system", 01:24:36.110 "dma_device_type": 1 01:24:36.110 }, 01:24:36.110 { 01:24:36.110 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:24:36.110 "dma_device_type": 2 01:24:36.110 }, 01:24:36.110 { 01:24:36.110 "dma_device_id": "system", 01:24:36.110 "dma_device_type": 1 01:24:36.110 }, 01:24:36.110 { 01:24:36.110 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
01:24:36.110 "dma_device_type": 2 01:24:36.110 }, 01:24:36.110 { 01:24:36.110 "dma_device_id": "system", 01:24:36.110 "dma_device_type": 1 01:24:36.110 }, 01:24:36.110 { 01:24:36.110 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:24:36.110 "dma_device_type": 2 01:24:36.110 } 01:24:36.111 ], 01:24:36.111 "driver_specific": { 01:24:36.111 "raid": { 01:24:36.111 "uuid": "2f8bda0f-37a7-4ea1-a547-b328474f41eb", 01:24:36.111 "strip_size_kb": 64, 01:24:36.111 "state": "online", 01:24:36.111 "raid_level": "raid0", 01:24:36.111 "superblock": false, 01:24:36.111 "num_base_bdevs": 4, 01:24:36.111 "num_base_bdevs_discovered": 4, 01:24:36.111 "num_base_bdevs_operational": 4, 01:24:36.111 "base_bdevs_list": [ 01:24:36.111 { 01:24:36.111 "name": "BaseBdev1", 01:24:36.111 "uuid": "71795d47-0967-4c31-aac3-4660319a90ce", 01:24:36.111 "is_configured": true, 01:24:36.111 "data_offset": 0, 01:24:36.111 "data_size": 65536 01:24:36.111 }, 01:24:36.111 { 01:24:36.111 "name": "BaseBdev2", 01:24:36.111 "uuid": "9be575a8-ce23-44e1-8234-ffb7a2de5fe0", 01:24:36.111 "is_configured": true, 01:24:36.111 "data_offset": 0, 01:24:36.111 "data_size": 65536 01:24:36.111 }, 01:24:36.111 { 01:24:36.111 "name": "BaseBdev3", 01:24:36.111 "uuid": "7d871346-4a63-4739-a0e4-7c7d31e388a8", 01:24:36.111 "is_configured": true, 01:24:36.111 "data_offset": 0, 01:24:36.111 "data_size": 65536 01:24:36.111 }, 01:24:36.111 { 01:24:36.111 "name": "BaseBdev4", 01:24:36.111 "uuid": "b696f240-804f-455b-92e3-566451f24880", 01:24:36.111 "is_configured": true, 01:24:36.111 "data_offset": 0, 01:24:36.111 "data_size": 65536 01:24:36.111 } 01:24:36.111 ] 01:24:36.111 } 01:24:36.111 } 01:24:36.111 }' 01:24:36.111 05:19:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 01:24:36.111 05:19:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 01:24:36.111 BaseBdev2 01:24:36.111 BaseBdev3 
01:24:36.111 BaseBdev4' 01:24:36.111 05:19:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:24:36.111 05:19:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 01:24:36.111 05:19:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:24:36.111 05:19:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 01:24:36.111 05:19:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:36.111 05:19:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:24:36.111 05:19:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:24:36.111 05:19:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:36.370 05:19:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:24:36.370 05:19:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:24:36.370 05:19:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:24:36.370 05:19:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 01:24:36.370 05:19:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:36.370 05:19:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:24:36.370 05:19:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:24:36.370 05:19:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:36.370 05:19:27 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:24:36.370 05:19:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:24:36.370 05:19:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:24:36.370 05:19:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 01:24:36.370 05:19:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:24:36.370 05:19:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:36.370 05:19:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:24:36.370 05:19:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:36.370 05:19:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:24:36.370 05:19:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:24:36.370 05:19:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:24:36.370 05:19:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 01:24:36.370 05:19:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:24:36.370 05:19:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:36.370 05:19:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:24:36.370 05:19:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:36.370 05:19:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:24:36.370 05:19:27 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:24:36.370 05:19:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 01:24:36.370 05:19:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:36.370 05:19:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:24:36.370 [2024-12-09 05:19:27.926461] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 01:24:36.370 [2024-12-09 05:19:27.926523] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 01:24:36.370 [2024-12-09 05:19:27.926595] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 01:24:36.666 05:19:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:36.666 05:19:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 01:24:36.666 05:19:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 01:24:36.666 05:19:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 01:24:36.666 05:19:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 01:24:36.666 05:19:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 01:24:36.666 05:19:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 01:24:36.666 05:19:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:24:36.666 05:19:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 01:24:36.666 05:19:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 01:24:36.666 05:19:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 01:24:36.667 05:19:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 01:24:36.667 05:19:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:24:36.667 05:19:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:24:36.667 05:19:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:24:36.667 05:19:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:24:36.667 05:19:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:24:36.667 05:19:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:36.667 05:19:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:24:36.667 05:19:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:24:36.667 05:19:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:36.667 05:19:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:24:36.667 "name": "Existed_Raid", 01:24:36.667 "uuid": "2f8bda0f-37a7-4ea1-a547-b328474f41eb", 01:24:36.667 "strip_size_kb": 64, 01:24:36.667 "state": "offline", 01:24:36.667 "raid_level": "raid0", 01:24:36.667 "superblock": false, 01:24:36.667 "num_base_bdevs": 4, 01:24:36.667 "num_base_bdevs_discovered": 3, 01:24:36.667 "num_base_bdevs_operational": 3, 01:24:36.667 "base_bdevs_list": [ 01:24:36.667 { 01:24:36.667 "name": null, 01:24:36.667 "uuid": "00000000-0000-0000-0000-000000000000", 01:24:36.667 "is_configured": false, 01:24:36.667 "data_offset": 0, 01:24:36.667 "data_size": 65536 01:24:36.667 }, 01:24:36.667 { 01:24:36.667 "name": "BaseBdev2", 01:24:36.667 "uuid": "9be575a8-ce23-44e1-8234-ffb7a2de5fe0", 01:24:36.667 "is_configured": 
true, 01:24:36.667 "data_offset": 0, 01:24:36.667 "data_size": 65536 01:24:36.667 }, 01:24:36.667 { 01:24:36.667 "name": "BaseBdev3", 01:24:36.667 "uuid": "7d871346-4a63-4739-a0e4-7c7d31e388a8", 01:24:36.667 "is_configured": true, 01:24:36.667 "data_offset": 0, 01:24:36.667 "data_size": 65536 01:24:36.667 }, 01:24:36.667 { 01:24:36.667 "name": "BaseBdev4", 01:24:36.667 "uuid": "b696f240-804f-455b-92e3-566451f24880", 01:24:36.667 "is_configured": true, 01:24:36.667 "data_offset": 0, 01:24:36.667 "data_size": 65536 01:24:36.667 } 01:24:36.667 ] 01:24:36.667 }' 01:24:36.667 05:19:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:24:36.667 05:19:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:24:36.928 05:19:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 01:24:36.928 05:19:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 01:24:36.928 05:19:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 01:24:36.928 05:19:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:36.928 05:19:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 01:24:36.928 05:19:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:24:36.928 05:19:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:37.186 05:19:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 01:24:37.186 05:19:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 01:24:37.186 05:19:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 01:24:37.186 05:19:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
01:24:37.186 05:19:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:24:37.186 [2024-12-09 05:19:28.575084] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 01:24:37.186 05:19:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:37.186 05:19:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 01:24:37.186 05:19:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 01:24:37.186 05:19:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 01:24:37.186 05:19:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 01:24:37.186 05:19:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:37.186 05:19:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:24:37.186 05:19:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:37.186 05:19:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 01:24:37.186 05:19:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 01:24:37.186 05:19:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 01:24:37.186 05:19:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:37.186 05:19:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:24:37.186 [2024-12-09 05:19:28.706320] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 01:24:37.186 05:19:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:37.186 05:19:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 01:24:37.186 05:19:28 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 01:24:37.186 05:19:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 01:24:37.186 05:19:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 01:24:37.186 05:19:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:37.186 05:19:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:24:37.445 05:19:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:37.445 05:19:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 01:24:37.445 05:19:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 01:24:37.445 05:19:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 01:24:37.445 05:19:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:37.445 05:19:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:24:37.445 [2024-12-09 05:19:28.842961] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 01:24:37.445 [2024-12-09 05:19:28.843061] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 01:24:37.445 05:19:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:37.445 05:19:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 01:24:37.445 05:19:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 01:24:37.445 05:19:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 01:24:37.445 05:19:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r 
'.[0]["name"] | select(.)' 01:24:37.445 05:19:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:37.445 05:19:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:24:37.445 05:19:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:37.445 05:19:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 01:24:37.445 05:19:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 01:24:37.445 05:19:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 01:24:37.445 05:19:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 01:24:37.445 05:19:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 01:24:37.445 05:19:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 01:24:37.445 05:19:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:37.445 05:19:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:24:37.445 BaseBdev2 01:24:37.445 05:19:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:37.445 05:19:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 01:24:37.445 05:19:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 01:24:37.445 05:19:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 01:24:37.445 05:19:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 01:24:37.445 05:19:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 01:24:37.445 05:19:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 01:24:37.445 05:19:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 01:24:37.445 05:19:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:37.445 05:19:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:24:37.445 05:19:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:37.445 05:19:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 01:24:37.445 05:19:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:37.445 05:19:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:24:37.445 [ 01:24:37.445 { 01:24:37.445 "name": "BaseBdev2", 01:24:37.445 "aliases": [ 01:24:37.445 "327378a2-cfd0-4a4b-a5cf-bdafd548b928" 01:24:37.445 ], 01:24:37.445 "product_name": "Malloc disk", 01:24:37.445 "block_size": 512, 01:24:37.445 "num_blocks": 65536, 01:24:37.445 "uuid": "327378a2-cfd0-4a4b-a5cf-bdafd548b928", 01:24:37.445 "assigned_rate_limits": { 01:24:37.445 "rw_ios_per_sec": 0, 01:24:37.445 "rw_mbytes_per_sec": 0, 01:24:37.445 "r_mbytes_per_sec": 0, 01:24:37.445 "w_mbytes_per_sec": 0 01:24:37.445 }, 01:24:37.445 "claimed": false, 01:24:37.445 "zoned": false, 01:24:37.445 "supported_io_types": { 01:24:37.445 "read": true, 01:24:37.445 "write": true, 01:24:37.445 "unmap": true, 01:24:37.445 "flush": true, 01:24:37.445 "reset": true, 01:24:37.445 "nvme_admin": false, 01:24:37.445 "nvme_io": false, 01:24:37.445 "nvme_io_md": false, 01:24:37.445 "write_zeroes": true, 01:24:37.445 "zcopy": true, 01:24:37.445 "get_zone_info": false, 01:24:37.445 "zone_management": false, 01:24:37.445 "zone_append": false, 01:24:37.445 "compare": false, 01:24:37.445 "compare_and_write": false, 01:24:37.445 "abort": true, 01:24:37.445 "seek_hole": false, 01:24:37.445 
"seek_data": false, 01:24:37.445 "copy": true, 01:24:37.445 "nvme_iov_md": false 01:24:37.445 }, 01:24:37.445 "memory_domains": [ 01:24:37.445 { 01:24:37.445 "dma_device_id": "system", 01:24:37.445 "dma_device_type": 1 01:24:37.445 }, 01:24:37.445 { 01:24:37.445 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:24:37.445 "dma_device_type": 2 01:24:37.445 } 01:24:37.445 ], 01:24:37.445 "driver_specific": {} 01:24:37.445 } 01:24:37.445 ] 01:24:37.445 05:19:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:37.445 05:19:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 01:24:37.445 05:19:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 01:24:37.445 05:19:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 01:24:37.445 05:19:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 01:24:37.445 05:19:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:37.445 05:19:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:24:37.705 BaseBdev3 01:24:37.705 05:19:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:37.705 05:19:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 01:24:37.705 05:19:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 01:24:37.705 05:19:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 01:24:37.705 05:19:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 01:24:37.705 05:19:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 01:24:37.705 05:19:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
01:24:37.705 05:19:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 01:24:37.705 05:19:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:37.705 05:19:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:24:37.705 05:19:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:37.705 05:19:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 01:24:37.705 05:19:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:37.705 05:19:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:24:37.705 [ 01:24:37.705 { 01:24:37.705 "name": "BaseBdev3", 01:24:37.705 "aliases": [ 01:24:37.705 "3e4ad31f-a08f-4200-a011-80af91586345" 01:24:37.705 ], 01:24:37.705 "product_name": "Malloc disk", 01:24:37.705 "block_size": 512, 01:24:37.705 "num_blocks": 65536, 01:24:37.705 "uuid": "3e4ad31f-a08f-4200-a011-80af91586345", 01:24:37.705 "assigned_rate_limits": { 01:24:37.705 "rw_ios_per_sec": 0, 01:24:37.705 "rw_mbytes_per_sec": 0, 01:24:37.705 "r_mbytes_per_sec": 0, 01:24:37.705 "w_mbytes_per_sec": 0 01:24:37.705 }, 01:24:37.705 "claimed": false, 01:24:37.705 "zoned": false, 01:24:37.705 "supported_io_types": { 01:24:37.705 "read": true, 01:24:37.705 "write": true, 01:24:37.705 "unmap": true, 01:24:37.705 "flush": true, 01:24:37.705 "reset": true, 01:24:37.705 "nvme_admin": false, 01:24:37.705 "nvme_io": false, 01:24:37.705 "nvme_io_md": false, 01:24:37.705 "write_zeroes": true, 01:24:37.705 "zcopy": true, 01:24:37.705 "get_zone_info": false, 01:24:37.705 "zone_management": false, 01:24:37.705 "zone_append": false, 01:24:37.705 "compare": false, 01:24:37.705 "compare_and_write": false, 01:24:37.705 "abort": true, 01:24:37.705 "seek_hole": false, 01:24:37.705 "seek_data": false, 
01:24:37.705 "copy": true, 01:24:37.705 "nvme_iov_md": false 01:24:37.705 }, 01:24:37.705 "memory_domains": [ 01:24:37.705 { 01:24:37.705 "dma_device_id": "system", 01:24:37.705 "dma_device_type": 1 01:24:37.705 }, 01:24:37.705 { 01:24:37.705 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:24:37.705 "dma_device_type": 2 01:24:37.705 } 01:24:37.705 ], 01:24:37.705 "driver_specific": {} 01:24:37.705 } 01:24:37.705 ] 01:24:37.705 05:19:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:37.705 05:19:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 01:24:37.705 05:19:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 01:24:37.705 05:19:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 01:24:37.706 05:19:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 01:24:37.706 05:19:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:37.706 05:19:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:24:37.706 BaseBdev4 01:24:37.706 05:19:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:37.706 05:19:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 01:24:37.706 05:19:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 01:24:37.706 05:19:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 01:24:37.706 05:19:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 01:24:37.706 05:19:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 01:24:37.706 05:19:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 01:24:37.706 
05:19:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 01:24:37.706 05:19:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:37.706 05:19:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:24:37.706 05:19:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:37.706 05:19:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 01:24:37.706 05:19:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:37.706 05:19:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:24:37.706 [ 01:24:37.706 { 01:24:37.706 "name": "BaseBdev4", 01:24:37.706 "aliases": [ 01:24:37.706 "e4a2da9f-2c10-46ed-b335-1e8278a71aa2" 01:24:37.706 ], 01:24:37.706 "product_name": "Malloc disk", 01:24:37.706 "block_size": 512, 01:24:37.706 "num_blocks": 65536, 01:24:37.706 "uuid": "e4a2da9f-2c10-46ed-b335-1e8278a71aa2", 01:24:37.706 "assigned_rate_limits": { 01:24:37.706 "rw_ios_per_sec": 0, 01:24:37.706 "rw_mbytes_per_sec": 0, 01:24:37.706 "r_mbytes_per_sec": 0, 01:24:37.706 "w_mbytes_per_sec": 0 01:24:37.706 }, 01:24:37.706 "claimed": false, 01:24:37.706 "zoned": false, 01:24:37.706 "supported_io_types": { 01:24:37.706 "read": true, 01:24:37.706 "write": true, 01:24:37.706 "unmap": true, 01:24:37.706 "flush": true, 01:24:37.706 "reset": true, 01:24:37.706 "nvme_admin": false, 01:24:37.706 "nvme_io": false, 01:24:37.706 "nvme_io_md": false, 01:24:37.706 "write_zeroes": true, 01:24:37.706 "zcopy": true, 01:24:37.706 "get_zone_info": false, 01:24:37.706 "zone_management": false, 01:24:37.706 "zone_append": false, 01:24:37.706 "compare": false, 01:24:37.706 "compare_and_write": false, 01:24:37.706 "abort": true, 01:24:37.706 "seek_hole": false, 01:24:37.706 "seek_data": false, 01:24:37.706 
"copy": true, 01:24:37.706 "nvme_iov_md": false 01:24:37.706 }, 01:24:37.706 "memory_domains": [ 01:24:37.706 { 01:24:37.706 "dma_device_id": "system", 01:24:37.706 "dma_device_type": 1 01:24:37.706 }, 01:24:37.706 { 01:24:37.706 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:24:37.706 "dma_device_type": 2 01:24:37.706 } 01:24:37.706 ], 01:24:37.706 "driver_specific": {} 01:24:37.706 } 01:24:37.706 ] 01:24:37.706 05:19:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:37.706 05:19:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 01:24:37.706 05:19:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 01:24:37.706 05:19:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 01:24:37.706 05:19:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 01:24:37.706 05:19:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:37.706 05:19:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:24:37.706 [2024-12-09 05:19:29.197640] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 01:24:37.706 [2024-12-09 05:19:29.197720] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 01:24:37.706 [2024-12-09 05:19:29.197758] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 01:24:37.706 [2024-12-09 05:19:29.200278] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 01:24:37.706 [2024-12-09 05:19:29.200436] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 01:24:37.706 05:19:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:37.706 05:19:29 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 01:24:37.706 05:19:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:24:37.706 05:19:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:24:37.706 05:19:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 01:24:37.706 05:19:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:24:37.706 05:19:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 01:24:37.706 05:19:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:24:37.706 05:19:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:24:37.706 05:19:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:24:37.706 05:19:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:24:37.706 05:19:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:24:37.706 05:19:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:24:37.706 05:19:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:37.706 05:19:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:24:37.706 05:19:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:37.706 05:19:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:24:37.706 "name": "Existed_Raid", 01:24:37.706 "uuid": "00000000-0000-0000-0000-000000000000", 01:24:37.706 "strip_size_kb": 64, 01:24:37.706 "state": "configuring", 01:24:37.706 
"raid_level": "raid0", 01:24:37.706 "superblock": false, 01:24:37.706 "num_base_bdevs": 4, 01:24:37.706 "num_base_bdevs_discovered": 3, 01:24:37.706 "num_base_bdevs_operational": 4, 01:24:37.706 "base_bdevs_list": [ 01:24:37.706 { 01:24:37.706 "name": "BaseBdev1", 01:24:37.706 "uuid": "00000000-0000-0000-0000-000000000000", 01:24:37.706 "is_configured": false, 01:24:37.706 "data_offset": 0, 01:24:37.706 "data_size": 0 01:24:37.706 }, 01:24:37.706 { 01:24:37.706 "name": "BaseBdev2", 01:24:37.706 "uuid": "327378a2-cfd0-4a4b-a5cf-bdafd548b928", 01:24:37.706 "is_configured": true, 01:24:37.706 "data_offset": 0, 01:24:37.706 "data_size": 65536 01:24:37.706 }, 01:24:37.706 { 01:24:37.706 "name": "BaseBdev3", 01:24:37.707 "uuid": "3e4ad31f-a08f-4200-a011-80af91586345", 01:24:37.707 "is_configured": true, 01:24:37.707 "data_offset": 0, 01:24:37.707 "data_size": 65536 01:24:37.707 }, 01:24:37.707 { 01:24:37.707 "name": "BaseBdev4", 01:24:37.707 "uuid": "e4a2da9f-2c10-46ed-b335-1e8278a71aa2", 01:24:37.707 "is_configured": true, 01:24:37.707 "data_offset": 0, 01:24:37.707 "data_size": 65536 01:24:37.707 } 01:24:37.707 ] 01:24:37.707 }' 01:24:37.707 05:19:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:24:37.707 05:19:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:24:38.273 05:19:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 01:24:38.273 05:19:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:38.273 05:19:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:24:38.273 [2024-12-09 05:19:29.721841] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 01:24:38.273 05:19:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:38.273 05:19:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 4 01:24:38.273 05:19:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:24:38.273 05:19:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:24:38.273 05:19:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 01:24:38.273 05:19:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:24:38.273 05:19:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 01:24:38.273 05:19:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:24:38.273 05:19:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:24:38.273 05:19:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:24:38.273 05:19:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:24:38.273 05:19:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:24:38.273 05:19:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:38.273 05:19:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:24:38.273 05:19:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:24:38.273 05:19:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:38.273 05:19:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:24:38.273 "name": "Existed_Raid", 01:24:38.273 "uuid": "00000000-0000-0000-0000-000000000000", 01:24:38.273 "strip_size_kb": 64, 01:24:38.273 "state": "configuring", 01:24:38.273 "raid_level": "raid0", 01:24:38.273 "superblock": false, 01:24:38.273 
"num_base_bdevs": 4, 01:24:38.273 "num_base_bdevs_discovered": 2, 01:24:38.273 "num_base_bdevs_operational": 4, 01:24:38.273 "base_bdevs_list": [ 01:24:38.273 { 01:24:38.273 "name": "BaseBdev1", 01:24:38.273 "uuid": "00000000-0000-0000-0000-000000000000", 01:24:38.273 "is_configured": false, 01:24:38.273 "data_offset": 0, 01:24:38.273 "data_size": 0 01:24:38.273 }, 01:24:38.273 { 01:24:38.273 "name": null, 01:24:38.273 "uuid": "327378a2-cfd0-4a4b-a5cf-bdafd548b928", 01:24:38.273 "is_configured": false, 01:24:38.273 "data_offset": 0, 01:24:38.273 "data_size": 65536 01:24:38.273 }, 01:24:38.273 { 01:24:38.274 "name": "BaseBdev3", 01:24:38.274 "uuid": "3e4ad31f-a08f-4200-a011-80af91586345", 01:24:38.274 "is_configured": true, 01:24:38.274 "data_offset": 0, 01:24:38.274 "data_size": 65536 01:24:38.274 }, 01:24:38.274 { 01:24:38.274 "name": "BaseBdev4", 01:24:38.274 "uuid": "e4a2da9f-2c10-46ed-b335-1e8278a71aa2", 01:24:38.274 "is_configured": true, 01:24:38.274 "data_offset": 0, 01:24:38.274 "data_size": 65536 01:24:38.274 } 01:24:38.274 ] 01:24:38.274 }' 01:24:38.274 05:19:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:24:38.274 05:19:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:24:38.839 05:19:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 01:24:38.839 05:19:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:38.839 05:19:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 01:24:38.839 05:19:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:24:38.839 05:19:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:38.839 05:19:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 01:24:38.839 05:19:30 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 01:24:38.839 05:19:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:38.839 05:19:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:24:38.839 [2024-12-09 05:19:30.333437] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 01:24:38.839 BaseBdev1 01:24:38.839 05:19:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:38.839 05:19:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 01:24:38.839 05:19:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 01:24:38.839 05:19:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 01:24:38.839 05:19:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 01:24:38.839 05:19:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 01:24:38.839 05:19:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 01:24:38.839 05:19:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 01:24:38.839 05:19:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:38.839 05:19:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:24:38.839 05:19:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:38.839 05:19:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 01:24:38.839 05:19:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:38.839 05:19:30 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 01:24:38.839 [ 01:24:38.839 { 01:24:38.839 "name": "BaseBdev1", 01:24:38.839 "aliases": [ 01:24:38.839 "f46a917c-81c7-4180-9e95-62331dc90207" 01:24:38.839 ], 01:24:38.839 "product_name": "Malloc disk", 01:24:38.839 "block_size": 512, 01:24:38.839 "num_blocks": 65536, 01:24:38.839 "uuid": "f46a917c-81c7-4180-9e95-62331dc90207", 01:24:38.839 "assigned_rate_limits": { 01:24:38.839 "rw_ios_per_sec": 0, 01:24:38.839 "rw_mbytes_per_sec": 0, 01:24:38.839 "r_mbytes_per_sec": 0, 01:24:38.839 "w_mbytes_per_sec": 0 01:24:38.839 }, 01:24:38.839 "claimed": true, 01:24:38.839 "claim_type": "exclusive_write", 01:24:38.839 "zoned": false, 01:24:38.839 "supported_io_types": { 01:24:38.839 "read": true, 01:24:38.839 "write": true, 01:24:38.839 "unmap": true, 01:24:38.839 "flush": true, 01:24:38.839 "reset": true, 01:24:38.839 "nvme_admin": false, 01:24:38.839 "nvme_io": false, 01:24:38.839 "nvme_io_md": false, 01:24:38.839 "write_zeroes": true, 01:24:38.839 "zcopy": true, 01:24:38.839 "get_zone_info": false, 01:24:38.839 "zone_management": false, 01:24:38.839 "zone_append": false, 01:24:38.839 "compare": false, 01:24:38.839 "compare_and_write": false, 01:24:38.839 "abort": true, 01:24:38.839 "seek_hole": false, 01:24:38.839 "seek_data": false, 01:24:38.839 "copy": true, 01:24:38.839 "nvme_iov_md": false 01:24:38.839 }, 01:24:38.839 "memory_domains": [ 01:24:38.839 { 01:24:38.839 "dma_device_id": "system", 01:24:38.839 "dma_device_type": 1 01:24:38.839 }, 01:24:38.839 { 01:24:38.839 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:24:38.839 "dma_device_type": 2 01:24:38.839 } 01:24:38.839 ], 01:24:38.839 "driver_specific": {} 01:24:38.839 } 01:24:38.839 ] 01:24:38.839 05:19:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:38.839 05:19:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 01:24:38.839 05:19:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring raid0 64 4 01:24:38.839 05:19:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:24:38.839 05:19:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:24:38.839 05:19:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 01:24:38.839 05:19:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:24:38.839 05:19:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 01:24:38.839 05:19:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:24:38.839 05:19:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:24:38.839 05:19:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:24:38.839 05:19:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:24:38.839 05:19:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:24:38.839 05:19:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:38.839 05:19:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:24:38.839 05:19:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:24:38.839 05:19:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:38.839 05:19:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:24:38.839 "name": "Existed_Raid", 01:24:38.839 "uuid": "00000000-0000-0000-0000-000000000000", 01:24:38.839 "strip_size_kb": 64, 01:24:38.839 "state": "configuring", 01:24:38.839 "raid_level": "raid0", 01:24:38.839 "superblock": false, 
01:24:38.839 "num_base_bdevs": 4, 01:24:38.839 "num_base_bdevs_discovered": 3, 01:24:38.839 "num_base_bdevs_operational": 4, 01:24:38.839 "base_bdevs_list": [ 01:24:38.839 { 01:24:38.839 "name": "BaseBdev1", 01:24:38.839 "uuid": "f46a917c-81c7-4180-9e95-62331dc90207", 01:24:38.839 "is_configured": true, 01:24:38.839 "data_offset": 0, 01:24:38.839 "data_size": 65536 01:24:38.839 }, 01:24:38.839 { 01:24:38.840 "name": null, 01:24:38.840 "uuid": "327378a2-cfd0-4a4b-a5cf-bdafd548b928", 01:24:38.840 "is_configured": false, 01:24:38.840 "data_offset": 0, 01:24:38.840 "data_size": 65536 01:24:38.840 }, 01:24:38.840 { 01:24:38.840 "name": "BaseBdev3", 01:24:38.840 "uuid": "3e4ad31f-a08f-4200-a011-80af91586345", 01:24:38.840 "is_configured": true, 01:24:38.840 "data_offset": 0, 01:24:38.840 "data_size": 65536 01:24:38.840 }, 01:24:38.840 { 01:24:38.840 "name": "BaseBdev4", 01:24:38.840 "uuid": "e4a2da9f-2c10-46ed-b335-1e8278a71aa2", 01:24:38.840 "is_configured": true, 01:24:38.840 "data_offset": 0, 01:24:38.840 "data_size": 65536 01:24:38.840 } 01:24:38.840 ] 01:24:38.840 }' 01:24:38.840 05:19:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:24:38.840 05:19:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:24:39.405 05:19:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 01:24:39.405 05:19:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:39.405 05:19:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 01:24:39.405 05:19:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:24:39.405 05:19:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:39.405 05:19:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 01:24:39.405 05:19:30 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 01:24:39.405 05:19:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:39.405 05:19:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:24:39.405 [2024-12-09 05:19:30.941793] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 01:24:39.405 05:19:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:39.405 05:19:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 01:24:39.405 05:19:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:24:39.405 05:19:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:24:39.405 05:19:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 01:24:39.405 05:19:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:24:39.405 05:19:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 01:24:39.405 05:19:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:24:39.405 05:19:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:24:39.405 05:19:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:24:39.405 05:19:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:24:39.405 05:19:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:24:39.405 05:19:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:24:39.405 05:19:30 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:39.405 05:19:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:24:39.405 05:19:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:39.405 05:19:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:24:39.405 "name": "Existed_Raid", 01:24:39.405 "uuid": "00000000-0000-0000-0000-000000000000", 01:24:39.405 "strip_size_kb": 64, 01:24:39.405 "state": "configuring", 01:24:39.405 "raid_level": "raid0", 01:24:39.405 "superblock": false, 01:24:39.405 "num_base_bdevs": 4, 01:24:39.405 "num_base_bdevs_discovered": 2, 01:24:39.405 "num_base_bdevs_operational": 4, 01:24:39.405 "base_bdevs_list": [ 01:24:39.405 { 01:24:39.405 "name": "BaseBdev1", 01:24:39.405 "uuid": "f46a917c-81c7-4180-9e95-62331dc90207", 01:24:39.405 "is_configured": true, 01:24:39.405 "data_offset": 0, 01:24:39.405 "data_size": 65536 01:24:39.405 }, 01:24:39.405 { 01:24:39.405 "name": null, 01:24:39.405 "uuid": "327378a2-cfd0-4a4b-a5cf-bdafd548b928", 01:24:39.405 "is_configured": false, 01:24:39.405 "data_offset": 0, 01:24:39.405 "data_size": 65536 01:24:39.405 }, 01:24:39.405 { 01:24:39.405 "name": null, 01:24:39.405 "uuid": "3e4ad31f-a08f-4200-a011-80af91586345", 01:24:39.405 "is_configured": false, 01:24:39.405 "data_offset": 0, 01:24:39.405 "data_size": 65536 01:24:39.405 }, 01:24:39.405 { 01:24:39.405 "name": "BaseBdev4", 01:24:39.405 "uuid": "e4a2da9f-2c10-46ed-b335-1e8278a71aa2", 01:24:39.405 "is_configured": true, 01:24:39.405 "data_offset": 0, 01:24:39.405 "data_size": 65536 01:24:39.405 } 01:24:39.405 ] 01:24:39.405 }' 01:24:39.405 05:19:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:24:39.405 05:19:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:24:39.971 05:19:31 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 01:24:39.971 05:19:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 01:24:39.971 05:19:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:39.971 05:19:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:24:39.971 05:19:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:39.971 05:19:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 01:24:39.971 05:19:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 01:24:39.971 05:19:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:39.971 05:19:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:24:39.971 [2024-12-09 05:19:31.517940] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 01:24:39.971 05:19:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:39.971 05:19:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 01:24:39.971 05:19:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:24:39.971 05:19:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:24:39.971 05:19:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 01:24:39.971 05:19:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:24:39.971 05:19:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 01:24:39.971 05:19:31 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:24:39.971 05:19:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:24:39.971 05:19:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:24:39.971 05:19:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:24:39.971 05:19:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:24:39.971 05:19:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:39.971 05:19:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:24:39.971 05:19:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:24:39.971 05:19:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:39.971 05:19:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:24:39.971 "name": "Existed_Raid", 01:24:39.971 "uuid": "00000000-0000-0000-0000-000000000000", 01:24:39.971 "strip_size_kb": 64, 01:24:39.971 "state": "configuring", 01:24:39.971 "raid_level": "raid0", 01:24:39.971 "superblock": false, 01:24:39.971 "num_base_bdevs": 4, 01:24:39.971 "num_base_bdevs_discovered": 3, 01:24:39.971 "num_base_bdevs_operational": 4, 01:24:39.971 "base_bdevs_list": [ 01:24:39.971 { 01:24:39.971 "name": "BaseBdev1", 01:24:39.971 "uuid": "f46a917c-81c7-4180-9e95-62331dc90207", 01:24:39.971 "is_configured": true, 01:24:39.971 "data_offset": 0, 01:24:39.971 "data_size": 65536 01:24:39.971 }, 01:24:39.971 { 01:24:39.971 "name": null, 01:24:39.971 "uuid": "327378a2-cfd0-4a4b-a5cf-bdafd548b928", 01:24:39.971 "is_configured": false, 01:24:39.971 "data_offset": 0, 01:24:39.971 "data_size": 65536 01:24:39.971 }, 01:24:39.971 { 01:24:39.971 "name": "BaseBdev3", 01:24:39.971 "uuid": "3e4ad31f-a08f-4200-a011-80af91586345", 
01:24:39.971 "is_configured": true, 01:24:39.971 "data_offset": 0, 01:24:39.971 "data_size": 65536 01:24:39.971 }, 01:24:39.971 { 01:24:39.971 "name": "BaseBdev4", 01:24:39.971 "uuid": "e4a2da9f-2c10-46ed-b335-1e8278a71aa2", 01:24:39.971 "is_configured": true, 01:24:39.972 "data_offset": 0, 01:24:39.972 "data_size": 65536 01:24:39.972 } 01:24:39.972 ] 01:24:39.972 }' 01:24:39.972 05:19:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:24:39.972 05:19:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:24:40.538 05:19:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 01:24:40.538 05:19:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:40.538 05:19:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 01:24:40.538 05:19:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:24:40.538 05:19:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:40.538 05:19:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 01:24:40.538 05:19:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 01:24:40.538 05:19:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:40.538 05:19:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:24:40.538 [2024-12-09 05:19:32.110194] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 01:24:40.796 05:19:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:40.796 05:19:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 01:24:40.796 05:19:32 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:24:40.796 05:19:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:24:40.796 05:19:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 01:24:40.796 05:19:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:24:40.796 05:19:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 01:24:40.796 05:19:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:24:40.796 05:19:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:24:40.796 05:19:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:24:40.796 05:19:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:24:40.796 05:19:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:24:40.796 05:19:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:24:40.796 05:19:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:40.796 05:19:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:24:40.796 05:19:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:40.796 05:19:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:24:40.796 "name": "Existed_Raid", 01:24:40.796 "uuid": "00000000-0000-0000-0000-000000000000", 01:24:40.796 "strip_size_kb": 64, 01:24:40.796 "state": "configuring", 01:24:40.796 "raid_level": "raid0", 01:24:40.796 "superblock": false, 01:24:40.796 "num_base_bdevs": 4, 01:24:40.796 "num_base_bdevs_discovered": 2, 01:24:40.796 
"num_base_bdevs_operational": 4, 01:24:40.796 "base_bdevs_list": [ 01:24:40.796 { 01:24:40.796 "name": null, 01:24:40.796 "uuid": "f46a917c-81c7-4180-9e95-62331dc90207", 01:24:40.796 "is_configured": false, 01:24:40.796 "data_offset": 0, 01:24:40.796 "data_size": 65536 01:24:40.796 }, 01:24:40.796 { 01:24:40.796 "name": null, 01:24:40.796 "uuid": "327378a2-cfd0-4a4b-a5cf-bdafd548b928", 01:24:40.796 "is_configured": false, 01:24:40.796 "data_offset": 0, 01:24:40.796 "data_size": 65536 01:24:40.796 }, 01:24:40.796 { 01:24:40.796 "name": "BaseBdev3", 01:24:40.796 "uuid": "3e4ad31f-a08f-4200-a011-80af91586345", 01:24:40.796 "is_configured": true, 01:24:40.796 "data_offset": 0, 01:24:40.796 "data_size": 65536 01:24:40.796 }, 01:24:40.796 { 01:24:40.796 "name": "BaseBdev4", 01:24:40.796 "uuid": "e4a2da9f-2c10-46ed-b335-1e8278a71aa2", 01:24:40.796 "is_configured": true, 01:24:40.796 "data_offset": 0, 01:24:40.796 "data_size": 65536 01:24:40.796 } 01:24:40.796 ] 01:24:40.796 }' 01:24:40.796 05:19:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:24:40.796 05:19:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:24:41.361 05:19:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 01:24:41.361 05:19:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 01:24:41.361 05:19:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:41.361 05:19:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:24:41.361 05:19:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:41.361 05:19:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 01:24:41.361 05:19:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev 
Existed_Raid BaseBdev2 01:24:41.361 05:19:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:41.361 05:19:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:24:41.361 [2024-12-09 05:19:32.775354] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 01:24:41.361 05:19:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:41.361 05:19:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 01:24:41.361 05:19:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:24:41.361 05:19:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:24:41.361 05:19:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 01:24:41.361 05:19:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:24:41.361 05:19:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 01:24:41.361 05:19:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:24:41.361 05:19:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:24:41.361 05:19:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:24:41.361 05:19:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:24:41.361 05:19:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:24:41.361 05:19:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:41.361 05:19:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:24:41.361 05:19:32 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:24:41.361 05:19:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:41.361 05:19:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:24:41.361 "name": "Existed_Raid", 01:24:41.361 "uuid": "00000000-0000-0000-0000-000000000000", 01:24:41.361 "strip_size_kb": 64, 01:24:41.361 "state": "configuring", 01:24:41.361 "raid_level": "raid0", 01:24:41.361 "superblock": false, 01:24:41.361 "num_base_bdevs": 4, 01:24:41.361 "num_base_bdevs_discovered": 3, 01:24:41.361 "num_base_bdevs_operational": 4, 01:24:41.361 "base_bdevs_list": [ 01:24:41.361 { 01:24:41.361 "name": null, 01:24:41.361 "uuid": "f46a917c-81c7-4180-9e95-62331dc90207", 01:24:41.361 "is_configured": false, 01:24:41.361 "data_offset": 0, 01:24:41.361 "data_size": 65536 01:24:41.361 }, 01:24:41.361 { 01:24:41.361 "name": "BaseBdev2", 01:24:41.361 "uuid": "327378a2-cfd0-4a4b-a5cf-bdafd548b928", 01:24:41.361 "is_configured": true, 01:24:41.361 "data_offset": 0, 01:24:41.361 "data_size": 65536 01:24:41.361 }, 01:24:41.361 { 01:24:41.361 "name": "BaseBdev3", 01:24:41.361 "uuid": "3e4ad31f-a08f-4200-a011-80af91586345", 01:24:41.361 "is_configured": true, 01:24:41.361 "data_offset": 0, 01:24:41.361 "data_size": 65536 01:24:41.361 }, 01:24:41.361 { 01:24:41.361 "name": "BaseBdev4", 01:24:41.361 "uuid": "e4a2da9f-2c10-46ed-b335-1e8278a71aa2", 01:24:41.361 "is_configured": true, 01:24:41.361 "data_offset": 0, 01:24:41.361 "data_size": 65536 01:24:41.361 } 01:24:41.361 ] 01:24:41.361 }' 01:24:41.361 05:19:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:24:41.361 05:19:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:24:41.927 05:19:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 01:24:41.927 05:19:33 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 01:24:41.927 05:19:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:41.927 05:19:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:24:41.927 05:19:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:41.927 05:19:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 01:24:41.927 05:19:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 01:24:41.927 05:19:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:41.927 05:19:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:24:41.927 05:19:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 01:24:41.927 05:19:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:41.927 05:19:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u f46a917c-81c7-4180-9e95-62331dc90207 01:24:41.927 05:19:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:41.927 05:19:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:24:41.927 [2024-12-09 05:19:33.443629] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 01:24:41.927 [2024-12-09 05:19:33.443732] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 01:24:41.927 [2024-12-09 05:19:33.443748] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 01:24:41.927 [2024-12-09 05:19:33.444152] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 
01:24:41.927 [2024-12-09 05:19:33.444392] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 01:24:41.927 [2024-12-09 05:19:33.444424] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 01:24:41.927 [2024-12-09 05:19:33.444741] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:24:41.927 NewBaseBdev 01:24:41.927 05:19:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:41.927 05:19:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 01:24:41.927 05:19:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 01:24:41.927 05:19:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 01:24:41.927 05:19:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 01:24:41.927 05:19:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 01:24:41.927 05:19:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 01:24:41.927 05:19:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 01:24:41.927 05:19:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:41.927 05:19:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:24:41.927 05:19:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:41.927 05:19:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 01:24:41.927 05:19:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:41.927 05:19:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 01:24:41.927 [ 01:24:41.927 { 01:24:41.927 "name": "NewBaseBdev", 01:24:41.927 "aliases": [ 01:24:41.927 "f46a917c-81c7-4180-9e95-62331dc90207" 01:24:41.927 ], 01:24:41.927 "product_name": "Malloc disk", 01:24:41.927 "block_size": 512, 01:24:41.927 "num_blocks": 65536, 01:24:41.927 "uuid": "f46a917c-81c7-4180-9e95-62331dc90207", 01:24:41.927 "assigned_rate_limits": { 01:24:41.927 "rw_ios_per_sec": 0, 01:24:41.927 "rw_mbytes_per_sec": 0, 01:24:41.927 "r_mbytes_per_sec": 0, 01:24:41.927 "w_mbytes_per_sec": 0 01:24:41.927 }, 01:24:41.927 "claimed": true, 01:24:41.927 "claim_type": "exclusive_write", 01:24:41.927 "zoned": false, 01:24:41.927 "supported_io_types": { 01:24:41.927 "read": true, 01:24:41.927 "write": true, 01:24:41.927 "unmap": true, 01:24:41.927 "flush": true, 01:24:41.927 "reset": true, 01:24:41.927 "nvme_admin": false, 01:24:41.927 "nvme_io": false, 01:24:41.927 "nvme_io_md": false, 01:24:41.927 "write_zeroes": true, 01:24:41.927 "zcopy": true, 01:24:41.927 "get_zone_info": false, 01:24:41.927 "zone_management": false, 01:24:41.927 "zone_append": false, 01:24:41.927 "compare": false, 01:24:41.927 "compare_and_write": false, 01:24:41.927 "abort": true, 01:24:41.927 "seek_hole": false, 01:24:41.927 "seek_data": false, 01:24:41.927 "copy": true, 01:24:41.927 "nvme_iov_md": false 01:24:41.927 }, 01:24:41.927 "memory_domains": [ 01:24:41.927 { 01:24:41.927 "dma_device_id": "system", 01:24:41.927 "dma_device_type": 1 01:24:41.927 }, 01:24:41.927 { 01:24:41.927 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:24:41.927 "dma_device_type": 2 01:24:41.927 } 01:24:41.927 ], 01:24:41.927 "driver_specific": {} 01:24:41.927 } 01:24:41.927 ] 01:24:41.927 05:19:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:41.927 05:19:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 01:24:41.927 05:19:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 4 01:24:41.927 05:19:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:24:41.927 05:19:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:24:41.927 05:19:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 01:24:41.927 05:19:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:24:41.927 05:19:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 01:24:41.927 05:19:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:24:41.927 05:19:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:24:41.927 05:19:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:24:41.927 05:19:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:24:41.927 05:19:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:24:41.927 05:19:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:41.927 05:19:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:24:41.927 05:19:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:24:41.927 05:19:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:41.927 05:19:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:24:41.927 "name": "Existed_Raid", 01:24:41.927 "uuid": "b02b8282-9308-426b-849c-03bf2bdb89c0", 01:24:41.927 "strip_size_kb": 64, 01:24:41.927 "state": "online", 01:24:41.927 "raid_level": "raid0", 01:24:41.927 "superblock": false, 01:24:41.927 "num_base_bdevs": 4, 01:24:41.927 
"num_base_bdevs_discovered": 4, 01:24:41.927 "num_base_bdevs_operational": 4, 01:24:41.927 "base_bdevs_list": [ 01:24:41.927 { 01:24:41.927 "name": "NewBaseBdev", 01:24:41.927 "uuid": "f46a917c-81c7-4180-9e95-62331dc90207", 01:24:41.927 "is_configured": true, 01:24:41.927 "data_offset": 0, 01:24:41.927 "data_size": 65536 01:24:41.927 }, 01:24:41.927 { 01:24:41.927 "name": "BaseBdev2", 01:24:41.927 "uuid": "327378a2-cfd0-4a4b-a5cf-bdafd548b928", 01:24:41.927 "is_configured": true, 01:24:41.927 "data_offset": 0, 01:24:41.927 "data_size": 65536 01:24:41.927 }, 01:24:41.927 { 01:24:41.927 "name": "BaseBdev3", 01:24:41.927 "uuid": "3e4ad31f-a08f-4200-a011-80af91586345", 01:24:41.927 "is_configured": true, 01:24:41.927 "data_offset": 0, 01:24:41.927 "data_size": 65536 01:24:41.927 }, 01:24:41.927 { 01:24:41.927 "name": "BaseBdev4", 01:24:41.927 "uuid": "e4a2da9f-2c10-46ed-b335-1e8278a71aa2", 01:24:41.927 "is_configured": true, 01:24:41.927 "data_offset": 0, 01:24:41.927 "data_size": 65536 01:24:41.927 } 01:24:41.927 ] 01:24:41.927 }' 01:24:41.927 05:19:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:24:41.927 05:19:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:24:42.494 05:19:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 01:24:42.494 05:19:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 01:24:42.494 05:19:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 01:24:42.494 05:19:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 01:24:42.494 05:19:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 01:24:42.494 05:19:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 01:24:42.494 05:19:34 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 01:24:42.494 05:19:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:42.494 05:19:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:24:42.494 05:19:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 01:24:42.494 [2024-12-09 05:19:34.012957] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 01:24:42.494 05:19:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:42.494 05:19:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 01:24:42.494 "name": "Existed_Raid", 01:24:42.494 "aliases": [ 01:24:42.494 "b02b8282-9308-426b-849c-03bf2bdb89c0" 01:24:42.494 ], 01:24:42.494 "product_name": "Raid Volume", 01:24:42.494 "block_size": 512, 01:24:42.494 "num_blocks": 262144, 01:24:42.494 "uuid": "b02b8282-9308-426b-849c-03bf2bdb89c0", 01:24:42.494 "assigned_rate_limits": { 01:24:42.494 "rw_ios_per_sec": 0, 01:24:42.494 "rw_mbytes_per_sec": 0, 01:24:42.494 "r_mbytes_per_sec": 0, 01:24:42.494 "w_mbytes_per_sec": 0 01:24:42.494 }, 01:24:42.494 "claimed": false, 01:24:42.494 "zoned": false, 01:24:42.494 "supported_io_types": { 01:24:42.494 "read": true, 01:24:42.494 "write": true, 01:24:42.494 "unmap": true, 01:24:42.494 "flush": true, 01:24:42.494 "reset": true, 01:24:42.494 "nvme_admin": false, 01:24:42.494 "nvme_io": false, 01:24:42.494 "nvme_io_md": false, 01:24:42.494 "write_zeroes": true, 01:24:42.494 "zcopy": false, 01:24:42.494 "get_zone_info": false, 01:24:42.494 "zone_management": false, 01:24:42.494 "zone_append": false, 01:24:42.494 "compare": false, 01:24:42.494 "compare_and_write": false, 01:24:42.494 "abort": false, 01:24:42.494 "seek_hole": false, 01:24:42.494 "seek_data": false, 01:24:42.494 "copy": false, 01:24:42.494 "nvme_iov_md": false 01:24:42.494 }, 01:24:42.494 "memory_domains": [ 
01:24:42.494 { 01:24:42.494 "dma_device_id": "system", 01:24:42.494 "dma_device_type": 1 01:24:42.494 }, 01:24:42.494 { 01:24:42.494 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:24:42.494 "dma_device_type": 2 01:24:42.494 }, 01:24:42.494 { 01:24:42.494 "dma_device_id": "system", 01:24:42.494 "dma_device_type": 1 01:24:42.494 }, 01:24:42.494 { 01:24:42.495 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:24:42.495 "dma_device_type": 2 01:24:42.495 }, 01:24:42.495 { 01:24:42.495 "dma_device_id": "system", 01:24:42.495 "dma_device_type": 1 01:24:42.495 }, 01:24:42.495 { 01:24:42.495 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:24:42.495 "dma_device_type": 2 01:24:42.495 }, 01:24:42.495 { 01:24:42.495 "dma_device_id": "system", 01:24:42.495 "dma_device_type": 1 01:24:42.495 }, 01:24:42.495 { 01:24:42.495 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:24:42.495 "dma_device_type": 2 01:24:42.495 } 01:24:42.495 ], 01:24:42.495 "driver_specific": { 01:24:42.495 "raid": { 01:24:42.495 "uuid": "b02b8282-9308-426b-849c-03bf2bdb89c0", 01:24:42.495 "strip_size_kb": 64, 01:24:42.495 "state": "online", 01:24:42.495 "raid_level": "raid0", 01:24:42.495 "superblock": false, 01:24:42.495 "num_base_bdevs": 4, 01:24:42.495 "num_base_bdevs_discovered": 4, 01:24:42.495 "num_base_bdevs_operational": 4, 01:24:42.495 "base_bdevs_list": [ 01:24:42.495 { 01:24:42.495 "name": "NewBaseBdev", 01:24:42.495 "uuid": "f46a917c-81c7-4180-9e95-62331dc90207", 01:24:42.495 "is_configured": true, 01:24:42.495 "data_offset": 0, 01:24:42.495 "data_size": 65536 01:24:42.495 }, 01:24:42.495 { 01:24:42.495 "name": "BaseBdev2", 01:24:42.495 "uuid": "327378a2-cfd0-4a4b-a5cf-bdafd548b928", 01:24:42.495 "is_configured": true, 01:24:42.495 "data_offset": 0, 01:24:42.495 "data_size": 65536 01:24:42.495 }, 01:24:42.495 { 01:24:42.495 "name": "BaseBdev3", 01:24:42.495 "uuid": "3e4ad31f-a08f-4200-a011-80af91586345", 01:24:42.495 "is_configured": true, 01:24:42.495 "data_offset": 0, 01:24:42.495 "data_size": 65536 
01:24:42.495 }, 01:24:42.495 { 01:24:42.495 "name": "BaseBdev4", 01:24:42.495 "uuid": "e4a2da9f-2c10-46ed-b335-1e8278a71aa2", 01:24:42.495 "is_configured": true, 01:24:42.495 "data_offset": 0, 01:24:42.495 "data_size": 65536 01:24:42.495 } 01:24:42.495 ] 01:24:42.495 } 01:24:42.495 } 01:24:42.495 }' 01:24:42.495 05:19:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 01:24:42.495 05:19:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 01:24:42.495 BaseBdev2 01:24:42.495 BaseBdev3 01:24:42.495 BaseBdev4' 01:24:42.495 05:19:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:24:42.753 05:19:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 01:24:42.753 05:19:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:24:42.753 05:19:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 01:24:42.753 05:19:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:24:42.753 05:19:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:42.753 05:19:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:24:42.753 05:19:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:42.753 05:19:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:24:42.753 05:19:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:24:42.753 05:19:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:24:42.753 
05:19:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 01:24:42.753 05:19:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:24:42.753 05:19:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:42.753 05:19:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:24:42.753 05:19:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:42.753 05:19:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:24:42.753 05:19:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:24:42.753 05:19:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:24:42.753 05:19:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 01:24:42.753 05:19:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:42.753 05:19:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:24:42.753 05:19:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:24:42.753 05:19:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:42.753 05:19:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:24:42.753 05:19:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:24:42.753 05:19:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:24:42.753 05:19:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 
01:24:42.753 05:19:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:42.753 05:19:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:24:42.753 05:19:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:24:42.753 05:19:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:42.753 05:19:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:24:42.753 05:19:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:24:42.753 05:19:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 01:24:42.753 05:19:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:42.753 05:19:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:24:43.012 [2024-12-09 05:19:34.372648] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 01:24:43.012 [2024-12-09 05:19:34.372698] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 01:24:43.012 [2024-12-09 05:19:34.372850] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 01:24:43.012 [2024-12-09 05:19:34.373015] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 01:24:43.012 [2024-12-09 05:19:34.373036] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 01:24:43.012 05:19:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:43.012 05:19:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 69391 01:24:43.012 05:19:34 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@954 -- # '[' -z 69391 ']' 01:24:43.012 05:19:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 69391 01:24:43.012 05:19:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 01:24:43.012 05:19:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:24:43.012 05:19:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69391 01:24:43.012 05:19:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:24:43.012 05:19:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:24:43.012 killing process with pid 69391 01:24:43.012 05:19:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69391' 01:24:43.012 05:19:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 69391 01:24:43.012 [2024-12-09 05:19:34.406655] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 01:24:43.012 05:19:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 69391 01:24:43.271 [2024-12-09 05:19:34.747581] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 01:24:44.641 05:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 01:24:44.641 01:24:44.641 real 0m12.940s 01:24:44.641 user 0m21.423s 01:24:44.641 sys 0m1.835s 01:24:44.641 05:19:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 01:24:44.641 05:19:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:24:44.641 ************************************ 01:24:44.641 END TEST raid_state_function_test 01:24:44.641 ************************************ 01:24:44.641 05:19:35 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb 
raid_state_function_test raid0 4 true 01:24:44.641 05:19:35 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 01:24:44.641 05:19:35 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 01:24:44.641 05:19:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 01:24:44.641 ************************************ 01:24:44.641 START TEST raid_state_function_test_sb 01:24:44.641 ************************************ 01:24:44.641 05:19:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 true 01:24:44.641 05:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 01:24:44.641 05:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 01:24:44.641 05:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 01:24:44.641 05:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 01:24:44.641 05:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 01:24:44.641 05:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 01:24:44.641 05:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 01:24:44.641 05:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 01:24:44.641 05:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 01:24:44.641 05:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 01:24:44.641 05:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 01:24:44.641 05:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 01:24:44.641 05:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 01:24:44.641 
05:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 01:24:44.641 05:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 01:24:44.641 05:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 01:24:44.641 05:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 01:24:44.641 05:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 01:24:44.641 05:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 01:24:44.641 05:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 01:24:44.641 05:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 01:24:44.641 05:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 01:24:44.641 05:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 01:24:44.641 05:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 01:24:44.641 05:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 01:24:44.641 05:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 01:24:44.641 05:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 01:24:44.641 05:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 01:24:44.641 05:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 01:24:44.641 05:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=70068 01:24:44.641 05:19:35 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 01:24:44.641 Process raid pid: 70068 01:24:44.641 05:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 70068' 01:24:44.641 05:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 70068 01:24:44.641 05:19:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 70068 ']' 01:24:44.641 05:19:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:24:44.641 05:19:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 01:24:44.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:24:44.641 05:19:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:24:44.641 05:19:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 01:24:44.642 05:19:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:44.642 [2024-12-09 05:19:36.046728] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
01:24:44.642 [2024-12-09 05:19:36.046909] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:24:44.642 [2024-12-09 05:19:36.234608] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:24:44.899 [2024-12-09 05:19:36.356043] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:24:45.158 [2024-12-09 05:19:36.562568] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 01:24:45.158 [2024-12-09 05:19:36.562622] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 01:24:45.416 05:19:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:24:45.416 05:19:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 01:24:45.416 05:19:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 01:24:45.416 05:19:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:45.416 05:19:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:45.416 [2024-12-09 05:19:37.011777] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 01:24:45.416 [2024-12-09 05:19:37.011877] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 01:24:45.416 [2024-12-09 05:19:37.011905] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 01:24:45.416 [2024-12-09 05:19:37.011926] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 01:24:45.416 [2024-12-09 05:19:37.011939] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find 
bdev with name: BaseBdev3 01:24:45.416 [2024-12-09 05:19:37.011956] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 01:24:45.416 [2024-12-09 05:19:37.011968] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 01:24:45.416 [2024-12-09 05:19:37.011984] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 01:24:45.416 05:19:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:45.416 05:19:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 01:24:45.416 05:19:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:24:45.416 05:19:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:24:45.416 05:19:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 01:24:45.416 05:19:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:24:45.416 05:19:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 01:24:45.416 05:19:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:24:45.416 05:19:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:24:45.416 05:19:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:24:45.416 05:19:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 01:24:45.416 05:19:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:24:45.416 05:19:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:45.416 05:19:37 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:24:45.416 05:19:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:45.674 05:19:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:45.674 05:19:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:24:45.674 "name": "Existed_Raid", 01:24:45.674 "uuid": "00f10918-c373-4b7a-ba8b-3b90d8709ffc", 01:24:45.674 "strip_size_kb": 64, 01:24:45.674 "state": "configuring", 01:24:45.674 "raid_level": "raid0", 01:24:45.674 "superblock": true, 01:24:45.674 "num_base_bdevs": 4, 01:24:45.674 "num_base_bdevs_discovered": 0, 01:24:45.674 "num_base_bdevs_operational": 4, 01:24:45.674 "base_bdevs_list": [ 01:24:45.674 { 01:24:45.674 "name": "BaseBdev1", 01:24:45.674 "uuid": "00000000-0000-0000-0000-000000000000", 01:24:45.674 "is_configured": false, 01:24:45.674 "data_offset": 0, 01:24:45.674 "data_size": 0 01:24:45.674 }, 01:24:45.674 { 01:24:45.674 "name": "BaseBdev2", 01:24:45.674 "uuid": "00000000-0000-0000-0000-000000000000", 01:24:45.674 "is_configured": false, 01:24:45.674 "data_offset": 0, 01:24:45.674 "data_size": 0 01:24:45.674 }, 01:24:45.674 { 01:24:45.674 "name": "BaseBdev3", 01:24:45.674 "uuid": "00000000-0000-0000-0000-000000000000", 01:24:45.674 "is_configured": false, 01:24:45.674 "data_offset": 0, 01:24:45.674 "data_size": 0 01:24:45.674 }, 01:24:45.674 { 01:24:45.674 "name": "BaseBdev4", 01:24:45.674 "uuid": "00000000-0000-0000-0000-000000000000", 01:24:45.674 "is_configured": false, 01:24:45.674 "data_offset": 0, 01:24:45.674 "data_size": 0 01:24:45.674 } 01:24:45.674 ] 01:24:45.674 }' 01:24:45.674 05:19:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:24:45.674 05:19:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:45.931 05:19:37 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 01:24:45.931 05:19:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:45.931 05:19:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:45.931 [2024-12-09 05:19:37.515960] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 01:24:45.931 [2024-12-09 05:19:37.516029] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 01:24:45.931 05:19:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:45.931 05:19:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 01:24:45.931 05:19:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:45.931 05:19:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:45.931 [2024-12-09 05:19:37.523926] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 01:24:45.931 [2024-12-09 05:19:37.524016] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 01:24:45.931 [2024-12-09 05:19:37.524035] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 01:24:45.931 [2024-12-09 05:19:37.524053] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 01:24:45.931 [2024-12-09 05:19:37.524065] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 01:24:45.931 [2024-12-09 05:19:37.524081] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 01:24:45.931 [2024-12-09 05:19:37.524092] bdev.c:8674:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 01:24:45.931 [2024-12-09 05:19:37.524108] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 01:24:45.931 05:19:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:45.931 05:19:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 01:24:45.931 05:19:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:45.931 05:19:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:46.189 [2024-12-09 05:19:37.570428] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 01:24:46.189 BaseBdev1 01:24:46.189 05:19:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:46.189 05:19:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 01:24:46.189 05:19:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 01:24:46.189 05:19:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 01:24:46.189 05:19:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 01:24:46.189 05:19:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 01:24:46.189 05:19:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 01:24:46.189 05:19:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 01:24:46.189 05:19:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:46.189 05:19:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:46.189 05:19:37 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:46.189 05:19:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 01:24:46.189 05:19:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:46.189 05:19:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:46.189 [ 01:24:46.189 { 01:24:46.189 "name": "BaseBdev1", 01:24:46.189 "aliases": [ 01:24:46.189 "a7bed46f-2507-43dd-a1a1-9e70208c0373" 01:24:46.189 ], 01:24:46.189 "product_name": "Malloc disk", 01:24:46.189 "block_size": 512, 01:24:46.189 "num_blocks": 65536, 01:24:46.189 "uuid": "a7bed46f-2507-43dd-a1a1-9e70208c0373", 01:24:46.189 "assigned_rate_limits": { 01:24:46.189 "rw_ios_per_sec": 0, 01:24:46.189 "rw_mbytes_per_sec": 0, 01:24:46.189 "r_mbytes_per_sec": 0, 01:24:46.189 "w_mbytes_per_sec": 0 01:24:46.189 }, 01:24:46.189 "claimed": true, 01:24:46.189 "claim_type": "exclusive_write", 01:24:46.189 "zoned": false, 01:24:46.189 "supported_io_types": { 01:24:46.189 "read": true, 01:24:46.189 "write": true, 01:24:46.189 "unmap": true, 01:24:46.189 "flush": true, 01:24:46.189 "reset": true, 01:24:46.189 "nvme_admin": false, 01:24:46.189 "nvme_io": false, 01:24:46.189 "nvme_io_md": false, 01:24:46.189 "write_zeroes": true, 01:24:46.189 "zcopy": true, 01:24:46.189 "get_zone_info": false, 01:24:46.189 "zone_management": false, 01:24:46.189 "zone_append": false, 01:24:46.189 "compare": false, 01:24:46.189 "compare_and_write": false, 01:24:46.189 "abort": true, 01:24:46.189 "seek_hole": false, 01:24:46.189 "seek_data": false, 01:24:46.189 "copy": true, 01:24:46.189 "nvme_iov_md": false 01:24:46.189 }, 01:24:46.189 "memory_domains": [ 01:24:46.189 { 01:24:46.189 "dma_device_id": "system", 01:24:46.189 "dma_device_type": 1 01:24:46.189 }, 01:24:46.189 { 01:24:46.189 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:24:46.189 "dma_device_type": 2 01:24:46.189 } 
01:24:46.189 ], 01:24:46.189 "driver_specific": {} 01:24:46.189 } 01:24:46.189 ] 01:24:46.189 05:19:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:46.189 05:19:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 01:24:46.189 05:19:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 01:24:46.189 05:19:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:24:46.189 05:19:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:24:46.189 05:19:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 01:24:46.189 05:19:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:24:46.189 05:19:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 01:24:46.189 05:19:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:24:46.189 05:19:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:24:46.189 05:19:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:24:46.189 05:19:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 01:24:46.189 05:19:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:24:46.189 05:19:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:46.189 05:19:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:46.189 05:19:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:24:46.189 05:19:37 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:46.189 05:19:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:24:46.189 "name": "Existed_Raid", 01:24:46.189 "uuid": "a7ee0d6a-0f79-4c10-b08c-0c4d9ebfe556", 01:24:46.189 "strip_size_kb": 64, 01:24:46.189 "state": "configuring", 01:24:46.189 "raid_level": "raid0", 01:24:46.189 "superblock": true, 01:24:46.189 "num_base_bdevs": 4, 01:24:46.189 "num_base_bdevs_discovered": 1, 01:24:46.189 "num_base_bdevs_operational": 4, 01:24:46.189 "base_bdevs_list": [ 01:24:46.189 { 01:24:46.189 "name": "BaseBdev1", 01:24:46.189 "uuid": "a7bed46f-2507-43dd-a1a1-9e70208c0373", 01:24:46.189 "is_configured": true, 01:24:46.189 "data_offset": 2048, 01:24:46.189 "data_size": 63488 01:24:46.189 }, 01:24:46.189 { 01:24:46.189 "name": "BaseBdev2", 01:24:46.189 "uuid": "00000000-0000-0000-0000-000000000000", 01:24:46.189 "is_configured": false, 01:24:46.189 "data_offset": 0, 01:24:46.189 "data_size": 0 01:24:46.189 }, 01:24:46.189 { 01:24:46.189 "name": "BaseBdev3", 01:24:46.190 "uuid": "00000000-0000-0000-0000-000000000000", 01:24:46.190 "is_configured": false, 01:24:46.190 "data_offset": 0, 01:24:46.190 "data_size": 0 01:24:46.190 }, 01:24:46.190 { 01:24:46.190 "name": "BaseBdev4", 01:24:46.190 "uuid": "00000000-0000-0000-0000-000000000000", 01:24:46.190 "is_configured": false, 01:24:46.190 "data_offset": 0, 01:24:46.190 "data_size": 0 01:24:46.190 } 01:24:46.190 ] 01:24:46.190 }' 01:24:46.190 05:19:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:24:46.190 05:19:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:46.756 05:19:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 01:24:46.756 05:19:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:46.756 05:19:38 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:46.756 [2024-12-09 05:19:38.178849] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 01:24:46.756 [2024-12-09 05:19:38.178914] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 01:24:46.756 05:19:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:46.756 05:19:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 01:24:46.756 05:19:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:46.756 05:19:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:46.756 [2024-12-09 05:19:38.186801] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 01:24:46.756 [2024-12-09 05:19:38.191528] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 01:24:46.756 [2024-12-09 05:19:38.191584] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 01:24:46.756 [2024-12-09 05:19:38.191604] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 01:24:46.756 [2024-12-09 05:19:38.191625] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 01:24:46.756 [2024-12-09 05:19:38.191639] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 01:24:46.756 [2024-12-09 05:19:38.191656] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 01:24:46.756 05:19:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:46.756 05:19:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
(( i = 1 )) 01:24:46.756 05:19:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 01:24:46.756 05:19:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 01:24:46.756 05:19:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:24:46.756 05:19:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:24:46.756 05:19:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 01:24:46.756 05:19:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:24:46.756 05:19:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 01:24:46.756 05:19:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:24:46.756 05:19:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:24:46.756 05:19:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:24:46.756 05:19:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 01:24:46.756 05:19:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:24:46.756 05:19:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:24:46.756 05:19:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:46.756 05:19:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:46.756 05:19:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:46.756 05:19:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 01:24:46.756 "name": "Existed_Raid", 01:24:46.756 "uuid": "8eafd01c-7c84-4e32-ab76-f7914b3976e9", 01:24:46.756 "strip_size_kb": 64, 01:24:46.756 "state": "configuring", 01:24:46.756 "raid_level": "raid0", 01:24:46.756 "superblock": true, 01:24:46.756 "num_base_bdevs": 4, 01:24:46.756 "num_base_bdevs_discovered": 1, 01:24:46.756 "num_base_bdevs_operational": 4, 01:24:46.756 "base_bdevs_list": [ 01:24:46.756 { 01:24:46.756 "name": "BaseBdev1", 01:24:46.756 "uuid": "a7bed46f-2507-43dd-a1a1-9e70208c0373", 01:24:46.756 "is_configured": true, 01:24:46.756 "data_offset": 2048, 01:24:46.756 "data_size": 63488 01:24:46.756 }, 01:24:46.756 { 01:24:46.756 "name": "BaseBdev2", 01:24:46.756 "uuid": "00000000-0000-0000-0000-000000000000", 01:24:46.756 "is_configured": false, 01:24:46.756 "data_offset": 0, 01:24:46.756 "data_size": 0 01:24:46.756 }, 01:24:46.756 { 01:24:46.756 "name": "BaseBdev3", 01:24:46.756 "uuid": "00000000-0000-0000-0000-000000000000", 01:24:46.756 "is_configured": false, 01:24:46.756 "data_offset": 0, 01:24:46.756 "data_size": 0 01:24:46.756 }, 01:24:46.756 { 01:24:46.756 "name": "BaseBdev4", 01:24:46.756 "uuid": "00000000-0000-0000-0000-000000000000", 01:24:46.756 "is_configured": false, 01:24:46.756 "data_offset": 0, 01:24:46.756 "data_size": 0 01:24:46.756 } 01:24:46.756 ] 01:24:46.756 }' 01:24:46.756 05:19:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:24:46.756 05:19:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:47.323 05:19:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 01:24:47.323 05:19:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:47.323 05:19:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:47.323 [2024-12-09 05:19:38.763256] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 01:24:47.323 BaseBdev2 01:24:47.323 05:19:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:47.323 05:19:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 01:24:47.323 05:19:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 01:24:47.323 05:19:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 01:24:47.323 05:19:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 01:24:47.323 05:19:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 01:24:47.323 05:19:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 01:24:47.323 05:19:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 01:24:47.323 05:19:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:47.323 05:19:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:47.323 05:19:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:47.323 05:19:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 01:24:47.323 05:19:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:47.323 05:19:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:47.323 [ 01:24:47.323 { 01:24:47.323 "name": "BaseBdev2", 01:24:47.323 "aliases": [ 01:24:47.323 "88360106-35ba-4d9d-b490-8eb09533c4e8" 01:24:47.323 ], 01:24:47.323 "product_name": "Malloc disk", 01:24:47.323 "block_size": 512, 01:24:47.323 "num_blocks": 65536, 01:24:47.323 "uuid": "88360106-35ba-4d9d-b490-8eb09533c4e8", 
01:24:47.323 "assigned_rate_limits": { 01:24:47.323 "rw_ios_per_sec": 0, 01:24:47.323 "rw_mbytes_per_sec": 0, 01:24:47.323 "r_mbytes_per_sec": 0, 01:24:47.323 "w_mbytes_per_sec": 0 01:24:47.323 }, 01:24:47.323 "claimed": true, 01:24:47.323 "claim_type": "exclusive_write", 01:24:47.323 "zoned": false, 01:24:47.323 "supported_io_types": { 01:24:47.323 "read": true, 01:24:47.323 "write": true, 01:24:47.323 "unmap": true, 01:24:47.323 "flush": true, 01:24:47.323 "reset": true, 01:24:47.323 "nvme_admin": false, 01:24:47.323 "nvme_io": false, 01:24:47.323 "nvme_io_md": false, 01:24:47.323 "write_zeroes": true, 01:24:47.323 "zcopy": true, 01:24:47.323 "get_zone_info": false, 01:24:47.323 "zone_management": false, 01:24:47.323 "zone_append": false, 01:24:47.323 "compare": false, 01:24:47.323 "compare_and_write": false, 01:24:47.323 "abort": true, 01:24:47.323 "seek_hole": false, 01:24:47.323 "seek_data": false, 01:24:47.323 "copy": true, 01:24:47.323 "nvme_iov_md": false 01:24:47.323 }, 01:24:47.323 "memory_domains": [ 01:24:47.323 { 01:24:47.323 "dma_device_id": "system", 01:24:47.323 "dma_device_type": 1 01:24:47.323 }, 01:24:47.323 { 01:24:47.323 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:24:47.323 "dma_device_type": 2 01:24:47.323 } 01:24:47.323 ], 01:24:47.323 "driver_specific": {} 01:24:47.323 } 01:24:47.323 ] 01:24:47.323 05:19:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:47.323 05:19:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 01:24:47.323 05:19:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 01:24:47.323 05:19:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 01:24:47.323 05:19:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 01:24:47.323 05:19:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 01:24:47.323 05:19:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:24:47.323 05:19:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 01:24:47.323 05:19:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:24:47.323 05:19:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 01:24:47.323 05:19:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:24:47.324 05:19:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:24:47.324 05:19:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:24:47.324 05:19:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 01:24:47.324 05:19:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:24:47.324 05:19:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:24:47.324 05:19:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:47.324 05:19:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:47.324 05:19:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:47.324 05:19:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:24:47.324 "name": "Existed_Raid", 01:24:47.324 "uuid": "8eafd01c-7c84-4e32-ab76-f7914b3976e9", 01:24:47.324 "strip_size_kb": 64, 01:24:47.324 "state": "configuring", 01:24:47.324 "raid_level": "raid0", 01:24:47.324 "superblock": true, 01:24:47.324 "num_base_bdevs": 4, 01:24:47.324 "num_base_bdevs_discovered": 2, 01:24:47.324 
"num_base_bdevs_operational": 4, 01:24:47.324 "base_bdevs_list": [ 01:24:47.324 { 01:24:47.324 "name": "BaseBdev1", 01:24:47.324 "uuid": "a7bed46f-2507-43dd-a1a1-9e70208c0373", 01:24:47.324 "is_configured": true, 01:24:47.324 "data_offset": 2048, 01:24:47.324 "data_size": 63488 01:24:47.324 }, 01:24:47.324 { 01:24:47.324 "name": "BaseBdev2", 01:24:47.324 "uuid": "88360106-35ba-4d9d-b490-8eb09533c4e8", 01:24:47.324 "is_configured": true, 01:24:47.324 "data_offset": 2048, 01:24:47.324 "data_size": 63488 01:24:47.324 }, 01:24:47.324 { 01:24:47.324 "name": "BaseBdev3", 01:24:47.324 "uuid": "00000000-0000-0000-0000-000000000000", 01:24:47.324 "is_configured": false, 01:24:47.324 "data_offset": 0, 01:24:47.324 "data_size": 0 01:24:47.324 }, 01:24:47.324 { 01:24:47.324 "name": "BaseBdev4", 01:24:47.324 "uuid": "00000000-0000-0000-0000-000000000000", 01:24:47.324 "is_configured": false, 01:24:47.324 "data_offset": 0, 01:24:47.324 "data_size": 0 01:24:47.324 } 01:24:47.324 ] 01:24:47.324 }' 01:24:47.324 05:19:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:24:47.324 05:19:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:47.891 05:19:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 01:24:47.891 05:19:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:47.891 05:19:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:47.891 [2024-12-09 05:19:39.387491] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 01:24:47.891 BaseBdev3 01:24:47.891 05:19:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:47.891 05:19:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 01:24:47.891 05:19:39 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 01:24:47.891 05:19:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 01:24:47.891 05:19:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 01:24:47.891 05:19:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 01:24:47.891 05:19:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 01:24:47.891 05:19:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 01:24:47.891 05:19:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:47.891 05:19:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:47.891 05:19:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:47.891 05:19:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 01:24:47.891 05:19:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:47.891 05:19:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:47.891 [ 01:24:47.891 { 01:24:47.891 "name": "BaseBdev3", 01:24:47.891 "aliases": [ 01:24:47.891 "80176a89-8f7c-4da7-a967-a2ce99bcc7bc" 01:24:47.891 ], 01:24:47.891 "product_name": "Malloc disk", 01:24:47.891 "block_size": 512, 01:24:47.891 "num_blocks": 65536, 01:24:47.891 "uuid": "80176a89-8f7c-4da7-a967-a2ce99bcc7bc", 01:24:47.891 "assigned_rate_limits": { 01:24:47.891 "rw_ios_per_sec": 0, 01:24:47.891 "rw_mbytes_per_sec": 0, 01:24:47.891 "r_mbytes_per_sec": 0, 01:24:47.891 "w_mbytes_per_sec": 0 01:24:47.891 }, 01:24:47.891 "claimed": true, 01:24:47.891 "claim_type": "exclusive_write", 01:24:47.891 "zoned": false, 01:24:47.891 "supported_io_types": { 
01:24:47.891 "read": true, 01:24:47.891 "write": true, 01:24:47.891 "unmap": true, 01:24:47.891 "flush": true, 01:24:47.891 "reset": true, 01:24:47.891 "nvme_admin": false, 01:24:47.891 "nvme_io": false, 01:24:47.891 "nvme_io_md": false, 01:24:47.891 "write_zeroes": true, 01:24:47.891 "zcopy": true, 01:24:47.891 "get_zone_info": false, 01:24:47.891 "zone_management": false, 01:24:47.891 "zone_append": false, 01:24:47.891 "compare": false, 01:24:47.891 "compare_and_write": false, 01:24:47.891 "abort": true, 01:24:47.891 "seek_hole": false, 01:24:47.891 "seek_data": false, 01:24:47.891 "copy": true, 01:24:47.891 "nvme_iov_md": false 01:24:47.891 }, 01:24:47.891 "memory_domains": [ 01:24:47.891 { 01:24:47.891 "dma_device_id": "system", 01:24:47.891 "dma_device_type": 1 01:24:47.891 }, 01:24:47.891 { 01:24:47.891 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:24:47.891 "dma_device_type": 2 01:24:47.891 } 01:24:47.891 ], 01:24:47.891 "driver_specific": {} 01:24:47.891 } 01:24:47.891 ] 01:24:47.891 05:19:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:47.891 05:19:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 01:24:47.891 05:19:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 01:24:47.891 05:19:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 01:24:47.891 05:19:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 01:24:47.891 05:19:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:24:47.891 05:19:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:24:47.891 05:19:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 01:24:47.891 05:19:39 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 01:24:47.891 05:19:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 01:24:47.891 05:19:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:24:47.891 05:19:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:24:47.891 05:19:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:24:47.891 05:19:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 01:24:47.891 05:19:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:24:47.891 05:19:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:47.891 05:19:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:24:47.891 05:19:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:47.891 05:19:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:47.891 05:19:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:24:47.891 "name": "Existed_Raid", 01:24:47.891 "uuid": "8eafd01c-7c84-4e32-ab76-f7914b3976e9", 01:24:47.891 "strip_size_kb": 64, 01:24:47.891 "state": "configuring", 01:24:47.891 "raid_level": "raid0", 01:24:47.891 "superblock": true, 01:24:47.891 "num_base_bdevs": 4, 01:24:47.891 "num_base_bdevs_discovered": 3, 01:24:47.891 "num_base_bdevs_operational": 4, 01:24:47.891 "base_bdevs_list": [ 01:24:47.891 { 01:24:47.891 "name": "BaseBdev1", 01:24:47.891 "uuid": "a7bed46f-2507-43dd-a1a1-9e70208c0373", 01:24:47.891 "is_configured": true, 01:24:47.891 "data_offset": 2048, 01:24:47.891 "data_size": 63488 01:24:47.891 }, 01:24:47.891 { 01:24:47.891 "name": "BaseBdev2", 01:24:47.891 
"uuid": "88360106-35ba-4d9d-b490-8eb09533c4e8", 01:24:47.891 "is_configured": true, 01:24:47.891 "data_offset": 2048, 01:24:47.891 "data_size": 63488 01:24:47.891 }, 01:24:47.891 { 01:24:47.891 "name": "BaseBdev3", 01:24:47.891 "uuid": "80176a89-8f7c-4da7-a967-a2ce99bcc7bc", 01:24:47.891 "is_configured": true, 01:24:47.891 "data_offset": 2048, 01:24:47.891 "data_size": 63488 01:24:47.891 }, 01:24:47.891 { 01:24:47.891 "name": "BaseBdev4", 01:24:47.891 "uuid": "00000000-0000-0000-0000-000000000000", 01:24:47.891 "is_configured": false, 01:24:47.891 "data_offset": 0, 01:24:47.891 "data_size": 0 01:24:47.891 } 01:24:47.891 ] 01:24:47.891 }' 01:24:47.891 05:19:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:24:47.891 05:19:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:48.457 05:19:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 01:24:48.457 05:19:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:48.457 05:19:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:48.457 [2024-12-09 05:19:40.003599] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 01:24:48.457 [2024-12-09 05:19:40.003970] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 01:24:48.457 [2024-12-09 05:19:40.004006] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 01:24:48.457 BaseBdev4 01:24:48.457 [2024-12-09 05:19:40.004402] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 01:24:48.457 [2024-12-09 05:19:40.004635] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 01:24:48.457 [2024-12-09 05:19:40.004659] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 01:24:48.457 [2024-12-09 05:19:40.004858] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:24:48.457 05:19:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:48.457 05:19:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 01:24:48.457 05:19:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 01:24:48.457 05:19:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 01:24:48.457 05:19:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 01:24:48.457 05:19:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 01:24:48.457 05:19:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 01:24:48.457 05:19:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 01:24:48.457 05:19:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:48.457 05:19:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:48.457 05:19:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:48.457 05:19:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 01:24:48.457 05:19:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:48.457 05:19:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:48.457 [ 01:24:48.457 { 01:24:48.457 "name": "BaseBdev4", 01:24:48.457 "aliases": [ 01:24:48.457 "b7b11a21-5432-43ed-838d-016e259f389a" 01:24:48.457 ], 01:24:48.457 "product_name": "Malloc disk", 01:24:48.457 "block_size": 512, 01:24:48.457 
"num_blocks": 65536, 01:24:48.457 "uuid": "b7b11a21-5432-43ed-838d-016e259f389a", 01:24:48.457 "assigned_rate_limits": { 01:24:48.457 "rw_ios_per_sec": 0, 01:24:48.457 "rw_mbytes_per_sec": 0, 01:24:48.457 "r_mbytes_per_sec": 0, 01:24:48.457 "w_mbytes_per_sec": 0 01:24:48.457 }, 01:24:48.457 "claimed": true, 01:24:48.457 "claim_type": "exclusive_write", 01:24:48.457 "zoned": false, 01:24:48.457 "supported_io_types": { 01:24:48.457 "read": true, 01:24:48.457 "write": true, 01:24:48.457 "unmap": true, 01:24:48.457 "flush": true, 01:24:48.457 "reset": true, 01:24:48.457 "nvme_admin": false, 01:24:48.457 "nvme_io": false, 01:24:48.457 "nvme_io_md": false, 01:24:48.457 "write_zeroes": true, 01:24:48.457 "zcopy": true, 01:24:48.457 "get_zone_info": false, 01:24:48.457 "zone_management": false, 01:24:48.457 "zone_append": false, 01:24:48.457 "compare": false, 01:24:48.457 "compare_and_write": false, 01:24:48.457 "abort": true, 01:24:48.457 "seek_hole": false, 01:24:48.457 "seek_data": false, 01:24:48.457 "copy": true, 01:24:48.457 "nvme_iov_md": false 01:24:48.457 }, 01:24:48.457 "memory_domains": [ 01:24:48.457 { 01:24:48.457 "dma_device_id": "system", 01:24:48.457 "dma_device_type": 1 01:24:48.457 }, 01:24:48.457 { 01:24:48.457 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:24:48.457 "dma_device_type": 2 01:24:48.457 } 01:24:48.457 ], 01:24:48.457 "driver_specific": {} 01:24:48.457 } 01:24:48.457 ] 01:24:48.457 05:19:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:48.457 05:19:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 01:24:48.457 05:19:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 01:24:48.457 05:19:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 01:24:48.457 05:19:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 
01:24:48.457 05:19:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:24:48.457 05:19:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:24:48.457 05:19:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 01:24:48.457 05:19:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:24:48.457 05:19:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 01:24:48.457 05:19:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:24:48.457 05:19:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:24:48.457 05:19:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:24:48.457 05:19:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 01:24:48.457 05:19:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:24:48.457 05:19:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:24:48.457 05:19:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:48.457 05:19:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:48.457 05:19:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:48.715 05:19:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:24:48.715 "name": "Existed_Raid", 01:24:48.715 "uuid": "8eafd01c-7c84-4e32-ab76-f7914b3976e9", 01:24:48.715 "strip_size_kb": 64, 01:24:48.715 "state": "online", 01:24:48.715 "raid_level": "raid0", 01:24:48.715 "superblock": true, 01:24:48.715 "num_base_bdevs": 4, 
01:24:48.715 "num_base_bdevs_discovered": 4, 01:24:48.715 "num_base_bdevs_operational": 4, 01:24:48.715 "base_bdevs_list": [ 01:24:48.715 { 01:24:48.715 "name": "BaseBdev1", 01:24:48.715 "uuid": "a7bed46f-2507-43dd-a1a1-9e70208c0373", 01:24:48.715 "is_configured": true, 01:24:48.715 "data_offset": 2048, 01:24:48.715 "data_size": 63488 01:24:48.715 }, 01:24:48.715 { 01:24:48.715 "name": "BaseBdev2", 01:24:48.715 "uuid": "88360106-35ba-4d9d-b490-8eb09533c4e8", 01:24:48.715 "is_configured": true, 01:24:48.715 "data_offset": 2048, 01:24:48.715 "data_size": 63488 01:24:48.715 }, 01:24:48.715 { 01:24:48.715 "name": "BaseBdev3", 01:24:48.715 "uuid": "80176a89-8f7c-4da7-a967-a2ce99bcc7bc", 01:24:48.715 "is_configured": true, 01:24:48.715 "data_offset": 2048, 01:24:48.715 "data_size": 63488 01:24:48.715 }, 01:24:48.715 { 01:24:48.715 "name": "BaseBdev4", 01:24:48.715 "uuid": "b7b11a21-5432-43ed-838d-016e259f389a", 01:24:48.715 "is_configured": true, 01:24:48.715 "data_offset": 2048, 01:24:48.715 "data_size": 63488 01:24:48.715 } 01:24:48.715 ] 01:24:48.715 }' 01:24:48.715 05:19:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:24:48.716 05:19:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:48.973 05:19:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 01:24:48.973 05:19:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 01:24:48.973 05:19:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 01:24:48.973 05:19:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 01:24:48.973 05:19:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 01:24:48.973 05:19:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 01:24:48.973 
05:19:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 01:24:48.973 05:19:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 01:24:48.973 05:19:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:48.973 05:19:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:48.974 [2024-12-09 05:19:40.560295] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 01:24:48.974 05:19:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:49.232 05:19:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 01:24:49.232 "name": "Existed_Raid", 01:24:49.232 "aliases": [ 01:24:49.232 "8eafd01c-7c84-4e32-ab76-f7914b3976e9" 01:24:49.232 ], 01:24:49.232 "product_name": "Raid Volume", 01:24:49.232 "block_size": 512, 01:24:49.232 "num_blocks": 253952, 01:24:49.232 "uuid": "8eafd01c-7c84-4e32-ab76-f7914b3976e9", 01:24:49.232 "assigned_rate_limits": { 01:24:49.232 "rw_ios_per_sec": 0, 01:24:49.232 "rw_mbytes_per_sec": 0, 01:24:49.232 "r_mbytes_per_sec": 0, 01:24:49.232 "w_mbytes_per_sec": 0 01:24:49.232 }, 01:24:49.232 "claimed": false, 01:24:49.232 "zoned": false, 01:24:49.232 "supported_io_types": { 01:24:49.232 "read": true, 01:24:49.232 "write": true, 01:24:49.232 "unmap": true, 01:24:49.232 "flush": true, 01:24:49.232 "reset": true, 01:24:49.232 "nvme_admin": false, 01:24:49.232 "nvme_io": false, 01:24:49.232 "nvme_io_md": false, 01:24:49.232 "write_zeroes": true, 01:24:49.232 "zcopy": false, 01:24:49.232 "get_zone_info": false, 01:24:49.232 "zone_management": false, 01:24:49.232 "zone_append": false, 01:24:49.232 "compare": false, 01:24:49.232 "compare_and_write": false, 01:24:49.232 "abort": false, 01:24:49.232 "seek_hole": false, 01:24:49.232 "seek_data": false, 01:24:49.232 "copy": false, 01:24:49.232 
"nvme_iov_md": false 01:24:49.232 }, 01:24:49.232 "memory_domains": [ 01:24:49.232 { 01:24:49.232 "dma_device_id": "system", 01:24:49.232 "dma_device_type": 1 01:24:49.232 }, 01:24:49.232 { 01:24:49.232 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:24:49.232 "dma_device_type": 2 01:24:49.232 }, 01:24:49.232 { 01:24:49.232 "dma_device_id": "system", 01:24:49.232 "dma_device_type": 1 01:24:49.232 }, 01:24:49.232 { 01:24:49.232 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:24:49.232 "dma_device_type": 2 01:24:49.232 }, 01:24:49.232 { 01:24:49.232 "dma_device_id": "system", 01:24:49.232 "dma_device_type": 1 01:24:49.232 }, 01:24:49.232 { 01:24:49.232 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:24:49.232 "dma_device_type": 2 01:24:49.232 }, 01:24:49.232 { 01:24:49.232 "dma_device_id": "system", 01:24:49.232 "dma_device_type": 1 01:24:49.232 }, 01:24:49.232 { 01:24:49.232 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:24:49.232 "dma_device_type": 2 01:24:49.232 } 01:24:49.232 ], 01:24:49.232 "driver_specific": { 01:24:49.232 "raid": { 01:24:49.232 "uuid": "8eafd01c-7c84-4e32-ab76-f7914b3976e9", 01:24:49.232 "strip_size_kb": 64, 01:24:49.232 "state": "online", 01:24:49.233 "raid_level": "raid0", 01:24:49.233 "superblock": true, 01:24:49.233 "num_base_bdevs": 4, 01:24:49.233 "num_base_bdevs_discovered": 4, 01:24:49.233 "num_base_bdevs_operational": 4, 01:24:49.233 "base_bdevs_list": [ 01:24:49.233 { 01:24:49.233 "name": "BaseBdev1", 01:24:49.233 "uuid": "a7bed46f-2507-43dd-a1a1-9e70208c0373", 01:24:49.233 "is_configured": true, 01:24:49.233 "data_offset": 2048, 01:24:49.233 "data_size": 63488 01:24:49.233 }, 01:24:49.233 { 01:24:49.233 "name": "BaseBdev2", 01:24:49.233 "uuid": "88360106-35ba-4d9d-b490-8eb09533c4e8", 01:24:49.233 "is_configured": true, 01:24:49.233 "data_offset": 2048, 01:24:49.233 "data_size": 63488 01:24:49.233 }, 01:24:49.233 { 01:24:49.233 "name": "BaseBdev3", 01:24:49.233 "uuid": "80176a89-8f7c-4da7-a967-a2ce99bcc7bc", 01:24:49.233 "is_configured": true, 
01:24:49.233 "data_offset": 2048, 01:24:49.233 "data_size": 63488 01:24:49.233 }, 01:24:49.233 { 01:24:49.233 "name": "BaseBdev4", 01:24:49.233 "uuid": "b7b11a21-5432-43ed-838d-016e259f389a", 01:24:49.233 "is_configured": true, 01:24:49.233 "data_offset": 2048, 01:24:49.233 "data_size": 63488 01:24:49.233 } 01:24:49.233 ] 01:24:49.233 } 01:24:49.233 } 01:24:49.233 }' 01:24:49.233 05:19:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 01:24:49.233 05:19:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 01:24:49.233 BaseBdev2 01:24:49.233 BaseBdev3 01:24:49.233 BaseBdev4' 01:24:49.233 05:19:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:24:49.233 05:19:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 01:24:49.233 05:19:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:24:49.233 05:19:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 01:24:49.233 05:19:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:24:49.233 05:19:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:49.233 05:19:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:49.233 05:19:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:49.233 05:19:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:24:49.233 05:19:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:24:49.233 05:19:40 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:24:49.233 05:19:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 01:24:49.233 05:19:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:24:49.233 05:19:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:49.233 05:19:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:49.233 05:19:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:49.233 05:19:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:24:49.233 05:19:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:24:49.233 05:19:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:24:49.233 05:19:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 01:24:49.233 05:19:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:24:49.233 05:19:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:49.233 05:19:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:49.233 05:19:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:49.492 05:19:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:24:49.492 05:19:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:24:49.492 05:19:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 01:24:49.492 05:19:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 01:24:49.492 05:19:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:24:49.492 05:19:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:49.492 05:19:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:49.492 05:19:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:49.492 05:19:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:24:49.492 05:19:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:24:49.492 05:19:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 01:24:49.492 05:19:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:49.492 05:19:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:49.492 [2024-12-09 05:19:40.932079] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 01:24:49.492 [2024-12-09 05:19:40.932122] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 01:24:49.492 [2024-12-09 05:19:40.932206] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 01:24:49.492 05:19:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:49.492 05:19:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 01:24:49.492 05:19:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 01:24:49.492 05:19:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 01:24:49.492 05:19:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 01:24:49.492 05:19:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 01:24:49.492 05:19:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 01:24:49.492 05:19:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:24:49.492 05:19:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 01:24:49.492 05:19:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 01:24:49.492 05:19:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:24:49.492 05:19:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 01:24:49.492 05:19:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:24:49.492 05:19:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:24:49.492 05:19:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:24:49.492 05:19:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 01:24:49.492 05:19:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:24:49.492 05:19:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:24:49.492 05:19:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:49.492 05:19:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:49.492 05:19:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
01:24:49.493 05:19:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:24:49.493 "name": "Existed_Raid", 01:24:49.493 "uuid": "8eafd01c-7c84-4e32-ab76-f7914b3976e9", 01:24:49.493 "strip_size_kb": 64, 01:24:49.493 "state": "offline", 01:24:49.493 "raid_level": "raid0", 01:24:49.493 "superblock": true, 01:24:49.493 "num_base_bdevs": 4, 01:24:49.493 "num_base_bdevs_discovered": 3, 01:24:49.493 "num_base_bdevs_operational": 3, 01:24:49.493 "base_bdevs_list": [ 01:24:49.493 { 01:24:49.493 "name": null, 01:24:49.493 "uuid": "00000000-0000-0000-0000-000000000000", 01:24:49.493 "is_configured": false, 01:24:49.493 "data_offset": 0, 01:24:49.493 "data_size": 63488 01:24:49.493 }, 01:24:49.493 { 01:24:49.493 "name": "BaseBdev2", 01:24:49.493 "uuid": "88360106-35ba-4d9d-b490-8eb09533c4e8", 01:24:49.493 "is_configured": true, 01:24:49.493 "data_offset": 2048, 01:24:49.493 "data_size": 63488 01:24:49.493 }, 01:24:49.493 { 01:24:49.493 "name": "BaseBdev3", 01:24:49.493 "uuid": "80176a89-8f7c-4da7-a967-a2ce99bcc7bc", 01:24:49.493 "is_configured": true, 01:24:49.493 "data_offset": 2048, 01:24:49.493 "data_size": 63488 01:24:49.493 }, 01:24:49.493 { 01:24:49.493 "name": "BaseBdev4", 01:24:49.493 "uuid": "b7b11a21-5432-43ed-838d-016e259f389a", 01:24:49.493 "is_configured": true, 01:24:49.493 "data_offset": 2048, 01:24:49.493 "data_size": 63488 01:24:49.493 } 01:24:49.493 ] 01:24:49.493 }' 01:24:49.493 05:19:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:24:49.493 05:19:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:50.059 05:19:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 01:24:50.059 05:19:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 01:24:50.059 05:19:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 01:24:50.059 
05:19:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:50.059 05:19:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:50.059 05:19:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 01:24:50.059 05:19:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:50.059 05:19:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 01:24:50.059 05:19:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 01:24:50.059 05:19:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 01:24:50.059 05:19:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:50.059 05:19:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:50.059 [2024-12-09 05:19:41.605472] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 01:24:50.317 05:19:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:50.317 05:19:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 01:24:50.317 05:19:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 01:24:50.317 05:19:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 01:24:50.317 05:19:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:50.317 05:19:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:50.318 05:19:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 01:24:50.318 05:19:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 01:24:50.318 05:19:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 01:24:50.318 05:19:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 01:24:50.318 05:19:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 01:24:50.318 05:19:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:50.318 05:19:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:50.318 [2024-12-09 05:19:41.761543] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 01:24:50.318 05:19:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:50.318 05:19:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 01:24:50.318 05:19:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 01:24:50.318 05:19:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 01:24:50.318 05:19:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:50.318 05:19:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:50.318 05:19:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 01:24:50.318 05:19:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:50.318 05:19:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 01:24:50.318 05:19:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 01:24:50.318 05:19:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 01:24:50.318 05:19:41 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:50.318 05:19:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:50.318 [2024-12-09 05:19:41.922315] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 01:24:50.318 [2024-12-09 05:19:41.922413] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 01:24:50.576 05:19:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:50.576 05:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 01:24:50.576 05:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 01:24:50.576 05:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 01:24:50.576 05:19:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:50.576 05:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 01:24:50.576 05:19:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:50.576 05:19:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:50.576 05:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 01:24:50.576 05:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 01:24:50.576 05:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 01:24:50.576 05:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 01:24:50.576 05:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 01:24:50.576 05:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 01:24:50.576 05:19:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:50.576 05:19:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:50.576 BaseBdev2 01:24:50.576 05:19:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:50.576 05:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 01:24:50.576 05:19:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 01:24:50.576 05:19:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 01:24:50.576 05:19:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 01:24:50.576 05:19:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 01:24:50.576 05:19:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 01:24:50.576 05:19:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 01:24:50.576 05:19:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:50.576 05:19:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:50.576 05:19:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:50.576 05:19:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 01:24:50.576 05:19:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:50.576 05:19:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:50.576 [ 01:24:50.576 { 01:24:50.576 "name": "BaseBdev2", 01:24:50.576 "aliases": [ 01:24:50.576 
"12842e77-a923-4b3e-b109-820ae0962d86" 01:24:50.576 ], 01:24:50.576 "product_name": "Malloc disk", 01:24:50.576 "block_size": 512, 01:24:50.576 "num_blocks": 65536, 01:24:50.576 "uuid": "12842e77-a923-4b3e-b109-820ae0962d86", 01:24:50.576 "assigned_rate_limits": { 01:24:50.576 "rw_ios_per_sec": 0, 01:24:50.576 "rw_mbytes_per_sec": 0, 01:24:50.576 "r_mbytes_per_sec": 0, 01:24:50.576 "w_mbytes_per_sec": 0 01:24:50.576 }, 01:24:50.576 "claimed": false, 01:24:50.576 "zoned": false, 01:24:50.576 "supported_io_types": { 01:24:50.576 "read": true, 01:24:50.576 "write": true, 01:24:50.576 "unmap": true, 01:24:50.576 "flush": true, 01:24:50.576 "reset": true, 01:24:50.576 "nvme_admin": false, 01:24:50.576 "nvme_io": false, 01:24:50.576 "nvme_io_md": false, 01:24:50.576 "write_zeroes": true, 01:24:50.576 "zcopy": true, 01:24:50.576 "get_zone_info": false, 01:24:50.576 "zone_management": false, 01:24:50.576 "zone_append": false, 01:24:50.576 "compare": false, 01:24:50.576 "compare_and_write": false, 01:24:50.576 "abort": true, 01:24:50.576 "seek_hole": false, 01:24:50.576 "seek_data": false, 01:24:50.576 "copy": true, 01:24:50.576 "nvme_iov_md": false 01:24:50.576 }, 01:24:50.576 "memory_domains": [ 01:24:50.576 { 01:24:50.576 "dma_device_id": "system", 01:24:50.576 "dma_device_type": 1 01:24:50.576 }, 01:24:50.576 { 01:24:50.576 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:24:50.576 "dma_device_type": 2 01:24:50.576 } 01:24:50.576 ], 01:24:50.576 "driver_specific": {} 01:24:50.576 } 01:24:50.576 ] 01:24:50.576 05:19:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:50.576 05:19:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 01:24:50.576 05:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 01:24:50.576 05:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 01:24:50.576 05:19:42 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 01:24:50.576 05:19:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:50.576 05:19:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:50.835 BaseBdev3 01:24:50.835 05:19:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:50.835 05:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 01:24:50.835 05:19:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 01:24:50.835 05:19:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 01:24:50.835 05:19:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 01:24:50.835 05:19:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 01:24:50.835 05:19:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 01:24:50.835 05:19:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 01:24:50.835 05:19:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:50.835 05:19:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:50.835 05:19:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:50.835 05:19:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 01:24:50.835 05:19:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:50.835 05:19:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:50.835 [ 01:24:50.835 { 
01:24:50.835 "name": "BaseBdev3", 01:24:50.835 "aliases": [ 01:24:50.835 "8bf89fec-2b08-48b3-a70f-f0f449bd3700" 01:24:50.835 ], 01:24:50.835 "product_name": "Malloc disk", 01:24:50.835 "block_size": 512, 01:24:50.835 "num_blocks": 65536, 01:24:50.835 "uuid": "8bf89fec-2b08-48b3-a70f-f0f449bd3700", 01:24:50.835 "assigned_rate_limits": { 01:24:50.835 "rw_ios_per_sec": 0, 01:24:50.835 "rw_mbytes_per_sec": 0, 01:24:50.835 "r_mbytes_per_sec": 0, 01:24:50.835 "w_mbytes_per_sec": 0 01:24:50.835 }, 01:24:50.835 "claimed": false, 01:24:50.835 "zoned": false, 01:24:50.835 "supported_io_types": { 01:24:50.835 "read": true, 01:24:50.835 "write": true, 01:24:50.835 "unmap": true, 01:24:50.835 "flush": true, 01:24:50.835 "reset": true, 01:24:50.835 "nvme_admin": false, 01:24:50.835 "nvme_io": false, 01:24:50.835 "nvme_io_md": false, 01:24:50.835 "write_zeroes": true, 01:24:50.835 "zcopy": true, 01:24:50.835 "get_zone_info": false, 01:24:50.835 "zone_management": false, 01:24:50.835 "zone_append": false, 01:24:50.835 "compare": false, 01:24:50.835 "compare_and_write": false, 01:24:50.835 "abort": true, 01:24:50.835 "seek_hole": false, 01:24:50.835 "seek_data": false, 01:24:50.835 "copy": true, 01:24:50.835 "nvme_iov_md": false 01:24:50.835 }, 01:24:50.835 "memory_domains": [ 01:24:50.835 { 01:24:50.835 "dma_device_id": "system", 01:24:50.835 "dma_device_type": 1 01:24:50.835 }, 01:24:50.835 { 01:24:50.835 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:24:50.835 "dma_device_type": 2 01:24:50.835 } 01:24:50.835 ], 01:24:50.835 "driver_specific": {} 01:24:50.835 } 01:24:50.835 ] 01:24:50.835 05:19:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:50.835 05:19:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 01:24:50.835 05:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 01:24:50.835 05:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 01:24:50.835 05:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 01:24:50.835 05:19:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:50.835 05:19:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:50.835 BaseBdev4 01:24:50.835 05:19:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:50.835 05:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 01:24:50.835 05:19:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 01:24:50.835 05:19:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 01:24:50.835 05:19:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 01:24:50.835 05:19:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 01:24:50.835 05:19:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 01:24:50.835 05:19:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 01:24:50.835 05:19:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:50.835 05:19:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:50.835 05:19:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:50.835 05:19:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 01:24:50.835 05:19:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:50.835 05:19:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 01:24:50.835 [ 01:24:50.835 { 01:24:50.835 "name": "BaseBdev4", 01:24:50.835 "aliases": [ 01:24:50.835 "3a21c799-b0fb-4e2e-959e-43bea064809c" 01:24:50.835 ], 01:24:50.835 "product_name": "Malloc disk", 01:24:50.835 "block_size": 512, 01:24:50.835 "num_blocks": 65536, 01:24:50.835 "uuid": "3a21c799-b0fb-4e2e-959e-43bea064809c", 01:24:50.835 "assigned_rate_limits": { 01:24:50.835 "rw_ios_per_sec": 0, 01:24:50.835 "rw_mbytes_per_sec": 0, 01:24:50.835 "r_mbytes_per_sec": 0, 01:24:50.835 "w_mbytes_per_sec": 0 01:24:50.835 }, 01:24:50.835 "claimed": false, 01:24:50.835 "zoned": false, 01:24:50.835 "supported_io_types": { 01:24:50.836 "read": true, 01:24:50.836 "write": true, 01:24:50.836 "unmap": true, 01:24:50.836 "flush": true, 01:24:50.836 "reset": true, 01:24:50.836 "nvme_admin": false, 01:24:50.836 "nvme_io": false, 01:24:50.836 "nvme_io_md": false, 01:24:50.836 "write_zeroes": true, 01:24:50.836 "zcopy": true, 01:24:50.836 "get_zone_info": false, 01:24:50.836 "zone_management": false, 01:24:50.836 "zone_append": false, 01:24:50.836 "compare": false, 01:24:50.836 "compare_and_write": false, 01:24:50.836 "abort": true, 01:24:50.836 "seek_hole": false, 01:24:50.836 "seek_data": false, 01:24:50.836 "copy": true, 01:24:50.836 "nvme_iov_md": false 01:24:50.836 }, 01:24:50.836 "memory_domains": [ 01:24:50.836 { 01:24:50.836 "dma_device_id": "system", 01:24:50.836 "dma_device_type": 1 01:24:50.836 }, 01:24:50.836 { 01:24:50.836 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:24:50.836 "dma_device_type": 2 01:24:50.836 } 01:24:50.836 ], 01:24:50.836 "driver_specific": {} 01:24:50.836 } 01:24:50.836 ] 01:24:50.836 05:19:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:50.836 05:19:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 01:24:50.836 05:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 01:24:50.836 05:19:42 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 01:24:50.836 05:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 01:24:50.836 05:19:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:50.836 05:19:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:50.836 [2024-12-09 05:19:42.323761] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 01:24:50.836 [2024-12-09 05:19:42.323961] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 01:24:50.836 [2024-12-09 05:19:42.324135] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 01:24:50.836 [2024-12-09 05:19:42.326849] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 01:24:50.836 [2024-12-09 05:19:42.327113] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 01:24:50.836 05:19:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:50.836 05:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 01:24:50.836 05:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:24:50.836 05:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:24:50.836 05:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 01:24:50.836 05:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:24:50.836 05:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 01:24:50.836 05:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:24:50.836 05:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:24:50.836 05:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:24:50.836 05:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 01:24:50.836 05:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:24:50.836 05:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:24:50.836 05:19:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:50.836 05:19:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:50.836 05:19:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:50.836 05:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:24:50.836 "name": "Existed_Raid", 01:24:50.836 "uuid": "0dbe6b5f-16d3-44b2-b7f9-e6e638a3fadb", 01:24:50.836 "strip_size_kb": 64, 01:24:50.836 "state": "configuring", 01:24:50.836 "raid_level": "raid0", 01:24:50.836 "superblock": true, 01:24:50.836 "num_base_bdevs": 4, 01:24:50.836 "num_base_bdevs_discovered": 3, 01:24:50.836 "num_base_bdevs_operational": 4, 01:24:50.836 "base_bdevs_list": [ 01:24:50.836 { 01:24:50.836 "name": "BaseBdev1", 01:24:50.836 "uuid": "00000000-0000-0000-0000-000000000000", 01:24:50.836 "is_configured": false, 01:24:50.836 "data_offset": 0, 01:24:50.836 "data_size": 0 01:24:50.836 }, 01:24:50.836 { 01:24:50.836 "name": "BaseBdev2", 01:24:50.836 "uuid": "12842e77-a923-4b3e-b109-820ae0962d86", 01:24:50.836 "is_configured": true, 01:24:50.836 "data_offset": 2048, 01:24:50.836 "data_size": 63488 
01:24:50.836 }, 01:24:50.836 { 01:24:50.836 "name": "BaseBdev3", 01:24:50.836 "uuid": "8bf89fec-2b08-48b3-a70f-f0f449bd3700", 01:24:50.836 "is_configured": true, 01:24:50.836 "data_offset": 2048, 01:24:50.836 "data_size": 63488 01:24:50.836 }, 01:24:50.836 { 01:24:50.836 "name": "BaseBdev4", 01:24:50.836 "uuid": "3a21c799-b0fb-4e2e-959e-43bea064809c", 01:24:50.836 "is_configured": true, 01:24:50.836 "data_offset": 2048, 01:24:50.836 "data_size": 63488 01:24:50.836 } 01:24:50.836 ] 01:24:50.836 }' 01:24:50.836 05:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:24:50.836 05:19:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:51.403 05:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 01:24:51.403 05:19:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:51.403 05:19:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:51.403 [2024-12-09 05:19:42.852036] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 01:24:51.403 05:19:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:51.403 05:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 01:24:51.403 05:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:24:51.403 05:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:24:51.403 05:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 01:24:51.403 05:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:24:51.403 05:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 01:24:51.403 05:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:24:51.403 05:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:24:51.403 05:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:24:51.403 05:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 01:24:51.403 05:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:24:51.403 05:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:24:51.403 05:19:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:51.403 05:19:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:51.403 05:19:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:51.403 05:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:24:51.403 "name": "Existed_Raid", 01:24:51.403 "uuid": "0dbe6b5f-16d3-44b2-b7f9-e6e638a3fadb", 01:24:51.403 "strip_size_kb": 64, 01:24:51.403 "state": "configuring", 01:24:51.403 "raid_level": "raid0", 01:24:51.403 "superblock": true, 01:24:51.403 "num_base_bdevs": 4, 01:24:51.403 "num_base_bdevs_discovered": 2, 01:24:51.403 "num_base_bdevs_operational": 4, 01:24:51.403 "base_bdevs_list": [ 01:24:51.403 { 01:24:51.403 "name": "BaseBdev1", 01:24:51.403 "uuid": "00000000-0000-0000-0000-000000000000", 01:24:51.403 "is_configured": false, 01:24:51.403 "data_offset": 0, 01:24:51.403 "data_size": 0 01:24:51.403 }, 01:24:51.403 { 01:24:51.403 "name": null, 01:24:51.403 "uuid": "12842e77-a923-4b3e-b109-820ae0962d86", 01:24:51.403 "is_configured": false, 01:24:51.403 "data_offset": 0, 01:24:51.403 "data_size": 63488 
01:24:51.403 }, 01:24:51.404 { 01:24:51.404 "name": "BaseBdev3", 01:24:51.404 "uuid": "8bf89fec-2b08-48b3-a70f-f0f449bd3700", 01:24:51.404 "is_configured": true, 01:24:51.404 "data_offset": 2048, 01:24:51.404 "data_size": 63488 01:24:51.404 }, 01:24:51.404 { 01:24:51.404 "name": "BaseBdev4", 01:24:51.404 "uuid": "3a21c799-b0fb-4e2e-959e-43bea064809c", 01:24:51.404 "is_configured": true, 01:24:51.404 "data_offset": 2048, 01:24:51.404 "data_size": 63488 01:24:51.404 } 01:24:51.404 ] 01:24:51.404 }' 01:24:51.404 05:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:24:51.404 05:19:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:51.971 05:19:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 01:24:51.971 05:19:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:51.971 05:19:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:51.971 05:19:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 01:24:51.971 05:19:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:51.971 05:19:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 01:24:51.971 05:19:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 01:24:51.971 05:19:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:51.971 05:19:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:51.971 [2024-12-09 05:19:43.475486] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 01:24:51.971 BaseBdev1 01:24:51.971 05:19:43 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:51.971 05:19:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 01:24:51.971 05:19:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 01:24:51.971 05:19:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 01:24:51.971 05:19:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 01:24:51.971 05:19:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 01:24:51.971 05:19:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 01:24:51.971 05:19:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 01:24:51.971 05:19:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:51.971 05:19:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:51.971 05:19:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:51.971 05:19:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 01:24:51.971 05:19:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:51.971 05:19:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:51.971 [ 01:24:51.971 { 01:24:51.971 "name": "BaseBdev1", 01:24:51.971 "aliases": [ 01:24:51.971 "afe6e550-e929-48ba-912c-5dfb8538cb72" 01:24:51.971 ], 01:24:51.971 "product_name": "Malloc disk", 01:24:51.971 "block_size": 512, 01:24:51.971 "num_blocks": 65536, 01:24:51.971 "uuid": "afe6e550-e929-48ba-912c-5dfb8538cb72", 01:24:51.971 "assigned_rate_limits": { 01:24:51.971 "rw_ios_per_sec": 0, 01:24:51.971 "rw_mbytes_per_sec": 0, 
01:24:51.971 "r_mbytes_per_sec": 0, 01:24:51.971 "w_mbytes_per_sec": 0 01:24:51.971 }, 01:24:51.971 "claimed": true, 01:24:51.971 "claim_type": "exclusive_write", 01:24:51.971 "zoned": false, 01:24:51.971 "supported_io_types": { 01:24:51.971 "read": true, 01:24:51.971 "write": true, 01:24:51.971 "unmap": true, 01:24:51.971 "flush": true, 01:24:51.971 "reset": true, 01:24:51.971 "nvme_admin": false, 01:24:51.971 "nvme_io": false, 01:24:51.971 "nvme_io_md": false, 01:24:51.971 "write_zeroes": true, 01:24:51.971 "zcopy": true, 01:24:51.971 "get_zone_info": false, 01:24:51.971 "zone_management": false, 01:24:51.971 "zone_append": false, 01:24:51.971 "compare": false, 01:24:51.971 "compare_and_write": false, 01:24:51.971 "abort": true, 01:24:51.971 "seek_hole": false, 01:24:51.971 "seek_data": false, 01:24:51.971 "copy": true, 01:24:51.971 "nvme_iov_md": false 01:24:51.971 }, 01:24:51.971 "memory_domains": [ 01:24:51.971 { 01:24:51.971 "dma_device_id": "system", 01:24:51.971 "dma_device_type": 1 01:24:51.971 }, 01:24:51.971 { 01:24:51.971 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:24:51.971 "dma_device_type": 2 01:24:51.971 } 01:24:51.971 ], 01:24:51.971 "driver_specific": {} 01:24:51.971 } 01:24:51.971 ] 01:24:51.971 05:19:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:51.971 05:19:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 01:24:51.971 05:19:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 01:24:51.971 05:19:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:24:51.971 05:19:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:24:51.971 05:19:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 01:24:51.971 05:19:43 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:24:51.971 05:19:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 01:24:51.971 05:19:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:24:51.971 05:19:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:24:51.971 05:19:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:24:51.971 05:19:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 01:24:51.971 05:19:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:24:51.971 05:19:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:24:51.971 05:19:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:51.971 05:19:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:51.971 05:19:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:51.971 05:19:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:24:51.971 "name": "Existed_Raid", 01:24:51.971 "uuid": "0dbe6b5f-16d3-44b2-b7f9-e6e638a3fadb", 01:24:51.971 "strip_size_kb": 64, 01:24:51.971 "state": "configuring", 01:24:51.971 "raid_level": "raid0", 01:24:51.971 "superblock": true, 01:24:51.971 "num_base_bdevs": 4, 01:24:51.971 "num_base_bdevs_discovered": 3, 01:24:51.971 "num_base_bdevs_operational": 4, 01:24:51.971 "base_bdevs_list": [ 01:24:51.971 { 01:24:51.971 "name": "BaseBdev1", 01:24:51.971 "uuid": "afe6e550-e929-48ba-912c-5dfb8538cb72", 01:24:51.971 "is_configured": true, 01:24:51.971 "data_offset": 2048, 01:24:51.971 "data_size": 63488 01:24:51.971 }, 01:24:51.971 { 
01:24:51.971 "name": null, 01:24:51.971 "uuid": "12842e77-a923-4b3e-b109-820ae0962d86", 01:24:51.971 "is_configured": false, 01:24:51.971 "data_offset": 0, 01:24:51.971 "data_size": 63488 01:24:51.971 }, 01:24:51.971 { 01:24:51.971 "name": "BaseBdev3", 01:24:51.971 "uuid": "8bf89fec-2b08-48b3-a70f-f0f449bd3700", 01:24:51.971 "is_configured": true, 01:24:51.971 "data_offset": 2048, 01:24:51.971 "data_size": 63488 01:24:51.971 }, 01:24:51.971 { 01:24:51.971 "name": "BaseBdev4", 01:24:51.971 "uuid": "3a21c799-b0fb-4e2e-959e-43bea064809c", 01:24:51.971 "is_configured": true, 01:24:51.971 "data_offset": 2048, 01:24:51.971 "data_size": 63488 01:24:51.971 } 01:24:51.971 ] 01:24:51.971 }' 01:24:51.971 05:19:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:24:51.971 05:19:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:52.636 05:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 01:24:52.636 05:19:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:52.636 05:19:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:52.636 05:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 01:24:52.636 05:19:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:52.636 05:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 01:24:52.636 05:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 01:24:52.636 05:19:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:52.637 05:19:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:52.637 [2024-12-09 05:19:44.095971] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 01:24:52.637 05:19:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:52.637 05:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 01:24:52.637 05:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:24:52.637 05:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:24:52.637 05:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 01:24:52.637 05:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:24:52.637 05:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 01:24:52.637 05:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:24:52.637 05:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:24:52.637 05:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:24:52.637 05:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 01:24:52.637 05:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:24:52.637 05:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:24:52.637 05:19:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:52.637 05:19:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:52.637 05:19:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:52.637 05:19:44 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:24:52.637 "name": "Existed_Raid", 01:24:52.637 "uuid": "0dbe6b5f-16d3-44b2-b7f9-e6e638a3fadb", 01:24:52.637 "strip_size_kb": 64, 01:24:52.637 "state": "configuring", 01:24:52.637 "raid_level": "raid0", 01:24:52.637 "superblock": true, 01:24:52.637 "num_base_bdevs": 4, 01:24:52.637 "num_base_bdevs_discovered": 2, 01:24:52.637 "num_base_bdevs_operational": 4, 01:24:52.637 "base_bdevs_list": [ 01:24:52.637 { 01:24:52.637 "name": "BaseBdev1", 01:24:52.637 "uuid": "afe6e550-e929-48ba-912c-5dfb8538cb72", 01:24:52.637 "is_configured": true, 01:24:52.637 "data_offset": 2048, 01:24:52.637 "data_size": 63488 01:24:52.637 }, 01:24:52.637 { 01:24:52.637 "name": null, 01:24:52.637 "uuid": "12842e77-a923-4b3e-b109-820ae0962d86", 01:24:52.637 "is_configured": false, 01:24:52.637 "data_offset": 0, 01:24:52.637 "data_size": 63488 01:24:52.637 }, 01:24:52.637 { 01:24:52.637 "name": null, 01:24:52.637 "uuid": "8bf89fec-2b08-48b3-a70f-f0f449bd3700", 01:24:52.637 "is_configured": false, 01:24:52.637 "data_offset": 0, 01:24:52.637 "data_size": 63488 01:24:52.637 }, 01:24:52.637 { 01:24:52.637 "name": "BaseBdev4", 01:24:52.637 "uuid": "3a21c799-b0fb-4e2e-959e-43bea064809c", 01:24:52.637 "is_configured": true, 01:24:52.637 "data_offset": 2048, 01:24:52.637 "data_size": 63488 01:24:52.637 } 01:24:52.637 ] 01:24:52.637 }' 01:24:52.637 05:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:24:52.637 05:19:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:53.219 05:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 01:24:53.219 05:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 01:24:53.219 05:19:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:53.219 
05:19:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:53.219 05:19:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:53.219 05:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 01:24:53.219 05:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 01:24:53.219 05:19:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:53.219 05:19:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:53.219 [2024-12-09 05:19:44.688186] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 01:24:53.219 05:19:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:53.219 05:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 01:24:53.219 05:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:24:53.219 05:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:24:53.219 05:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 01:24:53.219 05:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:24:53.219 05:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 01:24:53.219 05:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:24:53.219 05:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:24:53.219 05:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 01:24:53.219 05:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 01:24:53.219 05:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:24:53.219 05:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:24:53.219 05:19:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:53.219 05:19:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:53.219 05:19:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:53.219 05:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:24:53.219 "name": "Existed_Raid", 01:24:53.219 "uuid": "0dbe6b5f-16d3-44b2-b7f9-e6e638a3fadb", 01:24:53.219 "strip_size_kb": 64, 01:24:53.219 "state": "configuring", 01:24:53.219 "raid_level": "raid0", 01:24:53.219 "superblock": true, 01:24:53.219 "num_base_bdevs": 4, 01:24:53.219 "num_base_bdevs_discovered": 3, 01:24:53.219 "num_base_bdevs_operational": 4, 01:24:53.219 "base_bdevs_list": [ 01:24:53.219 { 01:24:53.219 "name": "BaseBdev1", 01:24:53.219 "uuid": "afe6e550-e929-48ba-912c-5dfb8538cb72", 01:24:53.219 "is_configured": true, 01:24:53.219 "data_offset": 2048, 01:24:53.219 "data_size": 63488 01:24:53.219 }, 01:24:53.219 { 01:24:53.219 "name": null, 01:24:53.219 "uuid": "12842e77-a923-4b3e-b109-820ae0962d86", 01:24:53.219 "is_configured": false, 01:24:53.219 "data_offset": 0, 01:24:53.219 "data_size": 63488 01:24:53.219 }, 01:24:53.219 { 01:24:53.219 "name": "BaseBdev3", 01:24:53.219 "uuid": "8bf89fec-2b08-48b3-a70f-f0f449bd3700", 01:24:53.219 "is_configured": true, 01:24:53.219 "data_offset": 2048, 01:24:53.219 "data_size": 63488 01:24:53.219 }, 01:24:53.219 { 01:24:53.219 "name": "BaseBdev4", 01:24:53.219 "uuid": 
"3a21c799-b0fb-4e2e-959e-43bea064809c", 01:24:53.219 "is_configured": true, 01:24:53.219 "data_offset": 2048, 01:24:53.219 "data_size": 63488 01:24:53.219 } 01:24:53.219 ] 01:24:53.219 }' 01:24:53.219 05:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:24:53.219 05:19:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:53.786 05:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 01:24:53.786 05:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 01:24:53.786 05:19:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:53.786 05:19:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:53.786 05:19:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:53.786 05:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 01:24:53.786 05:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 01:24:53.786 05:19:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:53.786 05:19:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:53.786 [2024-12-09 05:19:45.296445] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 01:24:53.786 05:19:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:53.786 05:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 01:24:53.786 05:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:24:53.786 05:19:45 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:24:53.786 05:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 01:24:53.786 05:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:24:53.786 05:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 01:24:53.786 05:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:24:53.786 05:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:24:53.786 05:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:24:53.786 05:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 01:24:53.786 05:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:24:53.786 05:19:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:53.786 05:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:24:53.786 05:19:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:54.044 05:19:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:54.044 05:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:24:54.044 "name": "Existed_Raid", 01:24:54.044 "uuid": "0dbe6b5f-16d3-44b2-b7f9-e6e638a3fadb", 01:24:54.044 "strip_size_kb": 64, 01:24:54.044 "state": "configuring", 01:24:54.044 "raid_level": "raid0", 01:24:54.044 "superblock": true, 01:24:54.044 "num_base_bdevs": 4, 01:24:54.044 "num_base_bdevs_discovered": 2, 01:24:54.044 "num_base_bdevs_operational": 4, 01:24:54.044 "base_bdevs_list": [ 01:24:54.044 { 01:24:54.044 "name": null, 01:24:54.044 
"uuid": "afe6e550-e929-48ba-912c-5dfb8538cb72", 01:24:54.044 "is_configured": false, 01:24:54.044 "data_offset": 0, 01:24:54.044 "data_size": 63488 01:24:54.044 }, 01:24:54.044 { 01:24:54.044 "name": null, 01:24:54.044 "uuid": "12842e77-a923-4b3e-b109-820ae0962d86", 01:24:54.044 "is_configured": false, 01:24:54.044 "data_offset": 0, 01:24:54.044 "data_size": 63488 01:24:54.044 }, 01:24:54.044 { 01:24:54.044 "name": "BaseBdev3", 01:24:54.044 "uuid": "8bf89fec-2b08-48b3-a70f-f0f449bd3700", 01:24:54.044 "is_configured": true, 01:24:54.044 "data_offset": 2048, 01:24:54.044 "data_size": 63488 01:24:54.044 }, 01:24:54.044 { 01:24:54.044 "name": "BaseBdev4", 01:24:54.044 "uuid": "3a21c799-b0fb-4e2e-959e-43bea064809c", 01:24:54.044 "is_configured": true, 01:24:54.044 "data_offset": 2048, 01:24:54.044 "data_size": 63488 01:24:54.044 } 01:24:54.044 ] 01:24:54.044 }' 01:24:54.044 05:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:24:54.044 05:19:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:54.301 05:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 01:24:54.301 05:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 01:24:54.301 05:19:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:54.301 05:19:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:54.559 05:19:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:54.559 05:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 01:24:54.559 05:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 01:24:54.559 05:19:45 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 01:24:54.559 05:19:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:54.559 [2024-12-09 05:19:45.966510] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 01:24:54.559 05:19:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:54.559 05:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 01:24:54.559 05:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:24:54.559 05:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:24:54.559 05:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 01:24:54.559 05:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:24:54.559 05:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 01:24:54.559 05:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:24:54.559 05:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:24:54.559 05:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:24:54.559 05:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 01:24:54.559 05:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:24:54.559 05:19:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:54.559 05:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:24:54.559 05:19:45 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:54.560 05:19:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:54.560 05:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:24:54.560 "name": "Existed_Raid", 01:24:54.560 "uuid": "0dbe6b5f-16d3-44b2-b7f9-e6e638a3fadb", 01:24:54.560 "strip_size_kb": 64, 01:24:54.560 "state": "configuring", 01:24:54.560 "raid_level": "raid0", 01:24:54.560 "superblock": true, 01:24:54.560 "num_base_bdevs": 4, 01:24:54.560 "num_base_bdevs_discovered": 3, 01:24:54.560 "num_base_bdevs_operational": 4, 01:24:54.560 "base_bdevs_list": [ 01:24:54.560 { 01:24:54.560 "name": null, 01:24:54.560 "uuid": "afe6e550-e929-48ba-912c-5dfb8538cb72", 01:24:54.560 "is_configured": false, 01:24:54.560 "data_offset": 0, 01:24:54.560 "data_size": 63488 01:24:54.560 }, 01:24:54.560 { 01:24:54.560 "name": "BaseBdev2", 01:24:54.560 "uuid": "12842e77-a923-4b3e-b109-820ae0962d86", 01:24:54.560 "is_configured": true, 01:24:54.560 "data_offset": 2048, 01:24:54.560 "data_size": 63488 01:24:54.560 }, 01:24:54.560 { 01:24:54.560 "name": "BaseBdev3", 01:24:54.560 "uuid": "8bf89fec-2b08-48b3-a70f-f0f449bd3700", 01:24:54.560 "is_configured": true, 01:24:54.560 "data_offset": 2048, 01:24:54.560 "data_size": 63488 01:24:54.560 }, 01:24:54.560 { 01:24:54.560 "name": "BaseBdev4", 01:24:54.560 "uuid": "3a21c799-b0fb-4e2e-959e-43bea064809c", 01:24:54.560 "is_configured": true, 01:24:54.560 "data_offset": 2048, 01:24:54.560 "data_size": 63488 01:24:54.560 } 01:24:54.560 ] 01:24:54.560 }' 01:24:54.560 05:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:24:54.560 05:19:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:55.126 05:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 01:24:55.126 05:19:46 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 01:24:55.126 05:19:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:55.126 05:19:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:55.126 05:19:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:55.126 05:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 01:24:55.126 05:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 01:24:55.126 05:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 01:24:55.126 05:19:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:55.126 05:19:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:55.126 05:19:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:55.126 05:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u afe6e550-e929-48ba-912c-5dfb8538cb72 01:24:55.126 05:19:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:55.126 05:19:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:55.126 [2024-12-09 05:19:46.640513] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 01:24:55.126 [2024-12-09 05:19:46.640898] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 01:24:55.126 [2024-12-09 05:19:46.640920] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 01:24:55.126 NewBaseBdev 01:24:55.126 [2024-12-09 05:19:46.641324] bdev_raid.c: 265:raid_bdev_create_cb: 
*DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 01:24:55.126 [2024-12-09 05:19:46.641558] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 01:24:55.126 [2024-12-09 05:19:46.641584] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 01:24:55.126 [2024-12-09 05:19:46.641766] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:24:55.126 05:19:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:55.126 05:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 01:24:55.126 05:19:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 01:24:55.126 05:19:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 01:24:55.126 05:19:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 01:24:55.126 05:19:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 01:24:55.126 05:19:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 01:24:55.126 05:19:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 01:24:55.126 05:19:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:55.126 05:19:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:55.126 05:19:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:55.126 05:19:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 01:24:55.126 05:19:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:55.126 05:19:46 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:55.126 [ 01:24:55.126 { 01:24:55.126 "name": "NewBaseBdev", 01:24:55.126 "aliases": [ 01:24:55.126 "afe6e550-e929-48ba-912c-5dfb8538cb72" 01:24:55.126 ], 01:24:55.126 "product_name": "Malloc disk", 01:24:55.126 "block_size": 512, 01:24:55.126 "num_blocks": 65536, 01:24:55.126 "uuid": "afe6e550-e929-48ba-912c-5dfb8538cb72", 01:24:55.126 "assigned_rate_limits": { 01:24:55.126 "rw_ios_per_sec": 0, 01:24:55.126 "rw_mbytes_per_sec": 0, 01:24:55.126 "r_mbytes_per_sec": 0, 01:24:55.126 "w_mbytes_per_sec": 0 01:24:55.126 }, 01:24:55.126 "claimed": true, 01:24:55.126 "claim_type": "exclusive_write", 01:24:55.126 "zoned": false, 01:24:55.126 "supported_io_types": { 01:24:55.126 "read": true, 01:24:55.127 "write": true, 01:24:55.127 "unmap": true, 01:24:55.127 "flush": true, 01:24:55.127 "reset": true, 01:24:55.127 "nvme_admin": false, 01:24:55.127 "nvme_io": false, 01:24:55.127 "nvme_io_md": false, 01:24:55.127 "write_zeroes": true, 01:24:55.127 "zcopy": true, 01:24:55.127 "get_zone_info": false, 01:24:55.127 "zone_management": false, 01:24:55.127 "zone_append": false, 01:24:55.127 "compare": false, 01:24:55.127 "compare_and_write": false, 01:24:55.127 "abort": true, 01:24:55.127 "seek_hole": false, 01:24:55.127 "seek_data": false, 01:24:55.127 "copy": true, 01:24:55.127 "nvme_iov_md": false 01:24:55.127 }, 01:24:55.127 "memory_domains": [ 01:24:55.127 { 01:24:55.127 "dma_device_id": "system", 01:24:55.127 "dma_device_type": 1 01:24:55.127 }, 01:24:55.127 { 01:24:55.127 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:24:55.127 "dma_device_type": 2 01:24:55.127 } 01:24:55.127 ], 01:24:55.127 "driver_specific": {} 01:24:55.127 } 01:24:55.127 ] 01:24:55.127 05:19:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:55.127 05:19:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 01:24:55.127 05:19:46 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 01:24:55.127 05:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:24:55.127 05:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:24:55.127 05:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 01:24:55.127 05:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:24:55.127 05:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 01:24:55.127 05:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:24:55.127 05:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:24:55.127 05:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:24:55.127 05:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 01:24:55.127 05:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:24:55.127 05:19:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:55.127 05:19:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:55.127 05:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:24:55.127 05:19:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:55.127 05:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:24:55.127 "name": "Existed_Raid", 01:24:55.127 "uuid": "0dbe6b5f-16d3-44b2-b7f9-e6e638a3fadb", 01:24:55.127 "strip_size_kb": 64, 01:24:55.127 
"state": "online", 01:24:55.127 "raid_level": "raid0", 01:24:55.127 "superblock": true, 01:24:55.127 "num_base_bdevs": 4, 01:24:55.127 "num_base_bdevs_discovered": 4, 01:24:55.127 "num_base_bdevs_operational": 4, 01:24:55.127 "base_bdevs_list": [ 01:24:55.127 { 01:24:55.127 "name": "NewBaseBdev", 01:24:55.127 "uuid": "afe6e550-e929-48ba-912c-5dfb8538cb72", 01:24:55.127 "is_configured": true, 01:24:55.127 "data_offset": 2048, 01:24:55.127 "data_size": 63488 01:24:55.127 }, 01:24:55.127 { 01:24:55.127 "name": "BaseBdev2", 01:24:55.127 "uuid": "12842e77-a923-4b3e-b109-820ae0962d86", 01:24:55.127 "is_configured": true, 01:24:55.127 "data_offset": 2048, 01:24:55.127 "data_size": 63488 01:24:55.127 }, 01:24:55.127 { 01:24:55.127 "name": "BaseBdev3", 01:24:55.127 "uuid": "8bf89fec-2b08-48b3-a70f-f0f449bd3700", 01:24:55.127 "is_configured": true, 01:24:55.127 "data_offset": 2048, 01:24:55.127 "data_size": 63488 01:24:55.127 }, 01:24:55.127 { 01:24:55.127 "name": "BaseBdev4", 01:24:55.127 "uuid": "3a21c799-b0fb-4e2e-959e-43bea064809c", 01:24:55.127 "is_configured": true, 01:24:55.127 "data_offset": 2048, 01:24:55.127 "data_size": 63488 01:24:55.127 } 01:24:55.127 ] 01:24:55.127 }' 01:24:55.127 05:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:24:55.127 05:19:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:55.694 05:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 01:24:55.694 05:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 01:24:55.694 05:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 01:24:55.694 05:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 01:24:55.694 05:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 01:24:55.694 
05:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 01:24:55.694 05:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 01:24:55.694 05:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 01:24:55.694 05:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:55.694 05:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:55.694 [2024-12-09 05:19:47.221411] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 01:24:55.694 05:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:55.694 05:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 01:24:55.694 "name": "Existed_Raid", 01:24:55.694 "aliases": [ 01:24:55.694 "0dbe6b5f-16d3-44b2-b7f9-e6e638a3fadb" 01:24:55.694 ], 01:24:55.694 "product_name": "Raid Volume", 01:24:55.694 "block_size": 512, 01:24:55.694 "num_blocks": 253952, 01:24:55.694 "uuid": "0dbe6b5f-16d3-44b2-b7f9-e6e638a3fadb", 01:24:55.694 "assigned_rate_limits": { 01:24:55.694 "rw_ios_per_sec": 0, 01:24:55.694 "rw_mbytes_per_sec": 0, 01:24:55.694 "r_mbytes_per_sec": 0, 01:24:55.694 "w_mbytes_per_sec": 0 01:24:55.694 }, 01:24:55.694 "claimed": false, 01:24:55.694 "zoned": false, 01:24:55.694 "supported_io_types": { 01:24:55.694 "read": true, 01:24:55.694 "write": true, 01:24:55.694 "unmap": true, 01:24:55.694 "flush": true, 01:24:55.694 "reset": true, 01:24:55.694 "nvme_admin": false, 01:24:55.694 "nvme_io": false, 01:24:55.694 "nvme_io_md": false, 01:24:55.694 "write_zeroes": true, 01:24:55.694 "zcopy": false, 01:24:55.694 "get_zone_info": false, 01:24:55.694 "zone_management": false, 01:24:55.694 "zone_append": false, 01:24:55.694 "compare": false, 01:24:55.694 "compare_and_write": false, 01:24:55.694 "abort": 
false, 01:24:55.694 "seek_hole": false, 01:24:55.694 "seek_data": false, 01:24:55.694 "copy": false, 01:24:55.694 "nvme_iov_md": false 01:24:55.694 }, 01:24:55.694 "memory_domains": [ 01:24:55.694 { 01:24:55.694 "dma_device_id": "system", 01:24:55.694 "dma_device_type": 1 01:24:55.694 }, 01:24:55.694 { 01:24:55.694 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:24:55.694 "dma_device_type": 2 01:24:55.694 }, 01:24:55.694 { 01:24:55.694 "dma_device_id": "system", 01:24:55.694 "dma_device_type": 1 01:24:55.694 }, 01:24:55.694 { 01:24:55.694 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:24:55.694 "dma_device_type": 2 01:24:55.694 }, 01:24:55.694 { 01:24:55.694 "dma_device_id": "system", 01:24:55.694 "dma_device_type": 1 01:24:55.694 }, 01:24:55.694 { 01:24:55.694 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:24:55.694 "dma_device_type": 2 01:24:55.694 }, 01:24:55.694 { 01:24:55.694 "dma_device_id": "system", 01:24:55.694 "dma_device_type": 1 01:24:55.694 }, 01:24:55.694 { 01:24:55.694 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:24:55.694 "dma_device_type": 2 01:24:55.694 } 01:24:55.694 ], 01:24:55.694 "driver_specific": { 01:24:55.694 "raid": { 01:24:55.694 "uuid": "0dbe6b5f-16d3-44b2-b7f9-e6e638a3fadb", 01:24:55.694 "strip_size_kb": 64, 01:24:55.694 "state": "online", 01:24:55.694 "raid_level": "raid0", 01:24:55.694 "superblock": true, 01:24:55.694 "num_base_bdevs": 4, 01:24:55.694 "num_base_bdevs_discovered": 4, 01:24:55.694 "num_base_bdevs_operational": 4, 01:24:55.694 "base_bdevs_list": [ 01:24:55.694 { 01:24:55.694 "name": "NewBaseBdev", 01:24:55.694 "uuid": "afe6e550-e929-48ba-912c-5dfb8538cb72", 01:24:55.695 "is_configured": true, 01:24:55.695 "data_offset": 2048, 01:24:55.695 "data_size": 63488 01:24:55.695 }, 01:24:55.695 { 01:24:55.695 "name": "BaseBdev2", 01:24:55.695 "uuid": "12842e77-a923-4b3e-b109-820ae0962d86", 01:24:55.695 "is_configured": true, 01:24:55.695 "data_offset": 2048, 01:24:55.695 "data_size": 63488 01:24:55.695 }, 01:24:55.695 { 01:24:55.695 
"name": "BaseBdev3", 01:24:55.695 "uuid": "8bf89fec-2b08-48b3-a70f-f0f449bd3700", 01:24:55.695 "is_configured": true, 01:24:55.695 "data_offset": 2048, 01:24:55.695 "data_size": 63488 01:24:55.695 }, 01:24:55.695 { 01:24:55.695 "name": "BaseBdev4", 01:24:55.695 "uuid": "3a21c799-b0fb-4e2e-959e-43bea064809c", 01:24:55.695 "is_configured": true, 01:24:55.695 "data_offset": 2048, 01:24:55.695 "data_size": 63488 01:24:55.695 } 01:24:55.695 ] 01:24:55.695 } 01:24:55.695 } 01:24:55.695 }' 01:24:55.695 05:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 01:24:55.953 05:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 01:24:55.953 BaseBdev2 01:24:55.953 BaseBdev3 01:24:55.953 BaseBdev4' 01:24:55.953 05:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:24:55.953 05:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 01:24:55.953 05:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:24:55.953 05:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 01:24:55.953 05:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:55.953 05:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:55.953 05:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:24:55.953 05:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:55.953 05:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:24:55.953 05:19:47 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:24:55.953 05:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:24:55.953 05:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:24:55.953 05:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 01:24:55.953 05:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:55.953 05:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:55.953 05:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:55.953 05:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:24:55.953 05:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:24:55.953 05:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:24:55.953 05:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 01:24:55.953 05:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:55.953 05:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:24:55.953 05:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:55.953 05:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:55.953 05:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:24:55.953 05:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 01:24:55.953 05:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:24:55.953 05:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 01:24:55.953 05:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:24:55.953 05:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:55.953 05:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:55.953 05:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:56.211 05:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:24:56.211 05:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:24:56.211 05:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 01:24:56.211 05:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:56.211 05:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:56.211 [2024-12-09 05:19:47.580960] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 01:24:56.211 [2024-12-09 05:19:47.581138] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 01:24:56.211 [2024-12-09 05:19:47.581281] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 01:24:56.211 [2024-12-09 05:19:47.581399] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 01:24:56.211 [2024-12-09 05:19:47.581420] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 01:24:56.211 05:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:56.211 05:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 70068 01:24:56.211 05:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 70068 ']' 01:24:56.211 05:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 70068 01:24:56.211 05:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 01:24:56.211 05:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:24:56.211 05:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70068 01:24:56.211 05:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:24:56.211 05:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:24:56.211 05:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70068' 01:24:56.211 killing process with pid 70068 01:24:56.211 05:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 70068 01:24:56.211 [2024-12-09 05:19:47.618843] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 01:24:56.211 05:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 70068 01:24:56.469 [2024-12-09 05:19:47.979262] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 01:24:57.842 05:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 01:24:57.842 ************************************ 01:24:57.842 END TEST raid_state_function_test_sb 01:24:57.842 ************************************ 01:24:57.842 01:24:57.842 real 0m13.212s 01:24:57.842 user 0m21.756s 01:24:57.842 sys 
0m1.867s 01:24:57.843 05:19:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 01:24:57.843 05:19:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:24:57.843 05:19:49 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 4 01:24:57.843 05:19:49 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 01:24:57.843 05:19:49 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 01:24:57.843 05:19:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 01:24:57.843 ************************************ 01:24:57.843 START TEST raid_superblock_test 01:24:57.843 ************************************ 01:24:57.843 05:19:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 4 01:24:57.843 05:19:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 01:24:57.843 05:19:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 01:24:57.843 05:19:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 01:24:57.843 05:19:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 01:24:57.843 05:19:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 01:24:57.843 05:19:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 01:24:57.843 05:19:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 01:24:57.843 05:19:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 01:24:57.843 05:19:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 01:24:57.843 05:19:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 01:24:57.843 05:19:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # 
local strip_size_create_arg 01:24:57.843 05:19:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 01:24:57.843 05:19:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 01:24:57.843 05:19:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 01:24:57.843 05:19:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 01:24:57.843 05:19:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 01:24:57.843 05:19:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=70755 01:24:57.843 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:24:57.843 05:19:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 70755 01:24:57.843 05:19:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 01:24:57.843 05:19:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 70755 ']' 01:24:57.843 05:19:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:24:57.843 05:19:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 01:24:57.843 05:19:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:24:57.843 05:19:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 01:24:57.843 05:19:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:24:57.843 [2024-12-09 05:19:49.321739] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
01:24:57.843 [2024-12-09 05:19:49.322989] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70755 ] 01:24:58.101 [2024-12-09 05:19:49.504168] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:24:58.101 [2024-12-09 05:19:49.628828] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:24:58.358 [2024-12-09 05:19:49.831959] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 01:24:58.358 [2024-12-09 05:19:49.832036] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 01:24:58.616 05:19:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:24:58.616 05:19:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 01:24:58.616 05:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 01:24:58.616 05:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 01:24:58.616 05:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 01:24:58.616 05:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 01:24:58.616 05:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 01:24:58.616 05:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 01:24:58.616 05:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 01:24:58.616 05:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 01:24:58.616 05:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 01:24:58.616 
05:19:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:58.616 05:19:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:24:58.875 malloc1 01:24:58.875 05:19:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:58.875 05:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 01:24:58.875 05:19:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:58.875 05:19:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:24:58.875 [2024-12-09 05:19:50.272736] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 01:24:58.875 [2024-12-09 05:19:50.272805] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:24:58.876 [2024-12-09 05:19:50.272839] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 01:24:58.876 [2024-12-09 05:19:50.272856] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:24:58.876 [2024-12-09 05:19:50.275431] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:24:58.876 [2024-12-09 05:19:50.275475] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 01:24:58.876 pt1 01:24:58.876 05:19:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:58.876 05:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 01:24:58.876 05:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 01:24:58.876 05:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 01:24:58.876 05:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 01:24:58.876 05:19:50 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 01:24:58.876 05:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 01:24:58.876 05:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 01:24:58.876 05:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 01:24:58.876 05:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 01:24:58.876 05:19:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:58.876 05:19:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:24:58.876 malloc2 01:24:58.876 05:19:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:58.876 05:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 01:24:58.876 05:19:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:58.876 05:19:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:24:58.876 [2024-12-09 05:19:50.319294] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 01:24:58.876 [2024-12-09 05:19:50.319366] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:24:58.876 [2024-12-09 05:19:50.319405] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 01:24:58.876 [2024-12-09 05:19:50.319421] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:24:58.876 [2024-12-09 05:19:50.321896] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:24:58.876 [2024-12-09 05:19:50.322096] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 01:24:58.876 
pt2 01:24:58.876 05:19:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:58.876 05:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 01:24:58.876 05:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 01:24:58.876 05:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 01:24:58.876 05:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 01:24:58.876 05:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 01:24:58.876 05:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 01:24:58.876 05:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 01:24:58.876 05:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 01:24:58.876 05:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 01:24:58.876 05:19:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:58.876 05:19:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:24:58.876 malloc3 01:24:58.876 05:19:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:58.876 05:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 01:24:58.876 05:19:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:58.876 05:19:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:24:58.876 [2024-12-09 05:19:50.384196] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 01:24:58.876 [2024-12-09 05:19:50.384272] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:24:58.876 [2024-12-09 05:19:50.384307] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 01:24:58.876 [2024-12-09 05:19:50.384324] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:24:58.876 [2024-12-09 05:19:50.387317] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:24:58.876 [2024-12-09 05:19:50.387387] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 01:24:58.876 pt3 01:24:58.876 05:19:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:58.876 05:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 01:24:58.876 05:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 01:24:58.876 05:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 01:24:58.876 05:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 01:24:58.876 05:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 01:24:58.876 05:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 01:24:58.876 05:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 01:24:58.876 05:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 01:24:58.876 05:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 01:24:58.876 05:19:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:58.876 05:19:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:24:58.876 malloc4 01:24:58.876 05:19:50 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:58.876 05:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 01:24:58.876 05:19:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:58.876 05:19:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:24:58.876 [2024-12-09 05:19:50.442549] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 01:24:58.876 [2024-12-09 05:19:50.442828] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:24:58.876 [2024-12-09 05:19:50.442911] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 01:24:58.876 [2024-12-09 05:19:50.443153] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:24:58.876 [2024-12-09 05:19:50.445936] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:24:58.876 [2024-12-09 05:19:50.446144] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 01:24:58.876 pt4 01:24:58.876 05:19:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:58.876 05:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 01:24:58.876 05:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 01:24:58.876 05:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 01:24:58.876 05:19:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:58.876 05:19:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:24:58.876 [2024-12-09 05:19:50.454596] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 01:24:58.876 [2024-12-09 
05:19:50.457045] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 01:24:58.876 [2024-12-09 05:19:50.457164] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 01:24:58.876 [2024-12-09 05:19:50.457234] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 01:24:58.876 [2024-12-09 05:19:50.457584] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 01:24:58.876 [2024-12-09 05:19:50.457607] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 01:24:58.876 [2024-12-09 05:19:50.457987] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 01:24:58.876 [2024-12-09 05:19:50.458221] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 01:24:58.876 [2024-12-09 05:19:50.458260] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 01:24:58.876 [2024-12-09 05:19:50.458520] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:24:58.876 05:19:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:58.876 05:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 01:24:58.877 05:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:24:58.877 05:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:24:58.877 05:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 01:24:58.877 05:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:24:58.877 05:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 01:24:58.877 05:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 01:24:58.877 05:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:24:58.877 05:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:24:58.877 05:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:24:58.877 05:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:24:58.877 05:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:24:58.877 05:19:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:58.877 05:19:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:24:58.877 05:19:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:59.136 05:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:24:59.136 "name": "raid_bdev1", 01:24:59.136 "uuid": "de633bf5-a554-4554-a355-751055624650", 01:24:59.136 "strip_size_kb": 64, 01:24:59.136 "state": "online", 01:24:59.136 "raid_level": "raid0", 01:24:59.136 "superblock": true, 01:24:59.136 "num_base_bdevs": 4, 01:24:59.136 "num_base_bdevs_discovered": 4, 01:24:59.136 "num_base_bdevs_operational": 4, 01:24:59.136 "base_bdevs_list": [ 01:24:59.136 { 01:24:59.136 "name": "pt1", 01:24:59.136 "uuid": "00000000-0000-0000-0000-000000000001", 01:24:59.136 "is_configured": true, 01:24:59.136 "data_offset": 2048, 01:24:59.136 "data_size": 63488 01:24:59.136 }, 01:24:59.136 { 01:24:59.136 "name": "pt2", 01:24:59.136 "uuid": "00000000-0000-0000-0000-000000000002", 01:24:59.136 "is_configured": true, 01:24:59.136 "data_offset": 2048, 01:24:59.136 "data_size": 63488 01:24:59.136 }, 01:24:59.136 { 01:24:59.136 "name": "pt3", 01:24:59.136 "uuid": "00000000-0000-0000-0000-000000000003", 01:24:59.136 "is_configured": true, 01:24:59.136 "data_offset": 2048, 01:24:59.136 
"data_size": 63488 01:24:59.136 }, 01:24:59.136 { 01:24:59.136 "name": "pt4", 01:24:59.136 "uuid": "00000000-0000-0000-0000-000000000004", 01:24:59.136 "is_configured": true, 01:24:59.136 "data_offset": 2048, 01:24:59.136 "data_size": 63488 01:24:59.136 } 01:24:59.136 ] 01:24:59.136 }' 01:24:59.136 05:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:24:59.136 05:19:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:24:59.395 05:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 01:24:59.395 05:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 01:24:59.395 05:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 01:24:59.395 05:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 01:24:59.395 05:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 01:24:59.395 05:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 01:24:59.395 05:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 01:24:59.395 05:19:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:59.395 05:19:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:24:59.395 05:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 01:24:59.395 [2024-12-09 05:19:50.971185] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 01:24:59.395 05:19:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:59.653 05:19:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 01:24:59.653 "name": "raid_bdev1", 01:24:59.653 "aliases": [ 01:24:59.653 "de633bf5-a554-4554-a355-751055624650" 
01:24:59.653 ], 01:24:59.653 "product_name": "Raid Volume", 01:24:59.653 "block_size": 512, 01:24:59.653 "num_blocks": 253952, 01:24:59.653 "uuid": "de633bf5-a554-4554-a355-751055624650", 01:24:59.653 "assigned_rate_limits": { 01:24:59.653 "rw_ios_per_sec": 0, 01:24:59.653 "rw_mbytes_per_sec": 0, 01:24:59.653 "r_mbytes_per_sec": 0, 01:24:59.653 "w_mbytes_per_sec": 0 01:24:59.653 }, 01:24:59.653 "claimed": false, 01:24:59.653 "zoned": false, 01:24:59.653 "supported_io_types": { 01:24:59.653 "read": true, 01:24:59.653 "write": true, 01:24:59.653 "unmap": true, 01:24:59.653 "flush": true, 01:24:59.653 "reset": true, 01:24:59.653 "nvme_admin": false, 01:24:59.653 "nvme_io": false, 01:24:59.653 "nvme_io_md": false, 01:24:59.653 "write_zeroes": true, 01:24:59.653 "zcopy": false, 01:24:59.653 "get_zone_info": false, 01:24:59.653 "zone_management": false, 01:24:59.653 "zone_append": false, 01:24:59.653 "compare": false, 01:24:59.653 "compare_and_write": false, 01:24:59.653 "abort": false, 01:24:59.653 "seek_hole": false, 01:24:59.653 "seek_data": false, 01:24:59.653 "copy": false, 01:24:59.653 "nvme_iov_md": false 01:24:59.653 }, 01:24:59.653 "memory_domains": [ 01:24:59.653 { 01:24:59.653 "dma_device_id": "system", 01:24:59.653 "dma_device_type": 1 01:24:59.653 }, 01:24:59.653 { 01:24:59.653 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:24:59.653 "dma_device_type": 2 01:24:59.653 }, 01:24:59.653 { 01:24:59.653 "dma_device_id": "system", 01:24:59.653 "dma_device_type": 1 01:24:59.653 }, 01:24:59.653 { 01:24:59.654 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:24:59.654 "dma_device_type": 2 01:24:59.654 }, 01:24:59.654 { 01:24:59.654 "dma_device_id": "system", 01:24:59.654 "dma_device_type": 1 01:24:59.654 }, 01:24:59.654 { 01:24:59.654 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:24:59.654 "dma_device_type": 2 01:24:59.654 }, 01:24:59.654 { 01:24:59.654 "dma_device_id": "system", 01:24:59.654 "dma_device_type": 1 01:24:59.654 }, 01:24:59.654 { 01:24:59.654 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 01:24:59.654 "dma_device_type": 2 01:24:59.654 } 01:24:59.654 ], 01:24:59.654 "driver_specific": { 01:24:59.654 "raid": { 01:24:59.654 "uuid": "de633bf5-a554-4554-a355-751055624650", 01:24:59.654 "strip_size_kb": 64, 01:24:59.654 "state": "online", 01:24:59.654 "raid_level": "raid0", 01:24:59.654 "superblock": true, 01:24:59.654 "num_base_bdevs": 4, 01:24:59.654 "num_base_bdevs_discovered": 4, 01:24:59.654 "num_base_bdevs_operational": 4, 01:24:59.654 "base_bdevs_list": [ 01:24:59.654 { 01:24:59.654 "name": "pt1", 01:24:59.654 "uuid": "00000000-0000-0000-0000-000000000001", 01:24:59.654 "is_configured": true, 01:24:59.654 "data_offset": 2048, 01:24:59.654 "data_size": 63488 01:24:59.654 }, 01:24:59.654 { 01:24:59.654 "name": "pt2", 01:24:59.654 "uuid": "00000000-0000-0000-0000-000000000002", 01:24:59.654 "is_configured": true, 01:24:59.654 "data_offset": 2048, 01:24:59.654 "data_size": 63488 01:24:59.654 }, 01:24:59.654 { 01:24:59.654 "name": "pt3", 01:24:59.654 "uuid": "00000000-0000-0000-0000-000000000003", 01:24:59.654 "is_configured": true, 01:24:59.654 "data_offset": 2048, 01:24:59.654 "data_size": 63488 01:24:59.654 }, 01:24:59.654 { 01:24:59.654 "name": "pt4", 01:24:59.654 "uuid": "00000000-0000-0000-0000-000000000004", 01:24:59.654 "is_configured": true, 01:24:59.654 "data_offset": 2048, 01:24:59.654 "data_size": 63488 01:24:59.654 } 01:24:59.654 ] 01:24:59.654 } 01:24:59.654 } 01:24:59.654 }' 01:24:59.654 05:19:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 01:24:59.654 05:19:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 01:24:59.654 pt2 01:24:59.654 pt3 01:24:59.654 pt4' 01:24:59.654 05:19:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:24:59.654 05:19:51 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 01:24:59.654 05:19:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:24:59.654 05:19:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 01:24:59.654 05:19:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:59.654 05:19:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:24:59.654 05:19:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:24:59.654 05:19:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:59.654 05:19:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:24:59.654 05:19:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:24:59.654 05:19:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:24:59.654 05:19:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:24:59.654 05:19:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 01:24:59.654 05:19:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:59.654 05:19:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:24:59.654 05:19:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:59.654 05:19:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:24:59.654 05:19:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:24:59.654 05:19:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:24:59.654 05:19:51 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:24:59.654 05:19:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 01:24:59.654 05:19:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:59.654 05:19:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:24:59.654 05:19:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:59.913 05:19:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:24:59.913 05:19:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:24:59.913 05:19:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:24:59.913 05:19:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 01:24:59.913 05:19:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:59.913 05:19:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:24:59.913 05:19:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:24:59.913 05:19:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:59.913 05:19:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:24:59.913 05:19:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:24:59.913 05:19:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 01:24:59.913 05:19:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 01:24:59.913 05:19:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 01:24:59.913 05:19:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:24:59.913 [2024-12-09 05:19:51.331204] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 01:24:59.913 05:19:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:59.913 05:19:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=de633bf5-a554-4554-a355-751055624650 01:24:59.913 05:19:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z de633bf5-a554-4554-a355-751055624650 ']' 01:24:59.913 05:19:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 01:24:59.913 05:19:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:59.913 05:19:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:24:59.913 [2024-12-09 05:19:51.378926] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 01:24:59.913 [2024-12-09 05:19:51.378953] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 01:24:59.913 [2024-12-09 05:19:51.379046] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 01:24:59.913 [2024-12-09 05:19:51.379129] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 01:24:59.913 [2024-12-09 05:19:51.379152] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 01:24:59.913 05:19:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:59.913 05:19:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 01:24:59.913 05:19:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 01:24:59.913 05:19:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 01:24:59.913 05:19:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:24:59.913 05:19:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:59.913 05:19:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 01:24:59.913 05:19:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 01:24:59.913 05:19:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 01:24:59.913 05:19:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 01:24:59.913 05:19:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:59.913 05:19:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:24:59.913 05:19:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:59.913 05:19:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 01:24:59.913 05:19:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 01:24:59.913 05:19:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:59.913 05:19:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:24:59.913 05:19:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:59.913 05:19:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 01:24:59.913 05:19:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 01:24:59.913 05:19:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:59.913 05:19:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:24:59.913 05:19:51 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:59.913 05:19:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 01:24:59.913 05:19:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 01:24:59.913 05:19:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:59.913 05:19:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:24:59.913 05:19:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:59.913 05:19:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 01:24:59.913 05:19:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 01:24:59.913 05:19:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:59.913 05:19:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:24:59.913 05:19:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:59.913 05:19:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 01:24:59.913 05:19:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 01:24:59.913 05:19:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 01:24:59.913 05:19:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 01:24:59.913 05:19:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 01:24:59.913 05:19:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:24:59.913 05:19:51 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 01:24:59.913 05:19:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:24:59.913 05:19:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 01:24:59.913 05:19:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:59.913 05:19:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:25:00.172 [2024-12-09 05:19:51.527002] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 01:25:00.172 [2024-12-09 05:19:51.529636] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 01:25:00.172 [2024-12-09 05:19:51.529704] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 01:25:00.172 [2024-12-09 05:19:51.529767] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 01:25:00.172 [2024-12-09 05:19:51.529864] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 01:25:00.172 [2024-12-09 05:19:51.529959] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 01:25:00.172 [2024-12-09 05:19:51.530000] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 01:25:00.172 [2024-12-09 05:19:51.530040] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 01:25:00.172 [2024-12-09 05:19:51.530067] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 01:25:00.172 [2024-12-09 05:19:51.530090] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007b00 name raid_bdev1, state configuring 01:25:00.172 request: 01:25:00.172 { 01:25:00.172 "name": "raid_bdev1", 01:25:00.172 "raid_level": "raid0", 01:25:00.172 "base_bdevs": [ 01:25:00.172 "malloc1", 01:25:00.172 "malloc2", 01:25:00.172 "malloc3", 01:25:00.172 "malloc4" 01:25:00.172 ], 01:25:00.172 "strip_size_kb": 64, 01:25:00.172 "superblock": false, 01:25:00.172 "method": "bdev_raid_create", 01:25:00.172 "req_id": 1 01:25:00.172 } 01:25:00.172 Got JSON-RPC error response 01:25:00.172 response: 01:25:00.172 { 01:25:00.172 "code": -17, 01:25:00.172 "message": "Failed to create RAID bdev raid_bdev1: File exists" 01:25:00.173 } 01:25:00.173 05:19:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 01:25:00.173 05:19:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 01:25:00.173 05:19:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:25:00.173 05:19:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:25:00.173 05:19:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:25:00.173 05:19:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 01:25:00.173 05:19:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 01:25:00.173 05:19:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:00.173 05:19:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:25:00.173 05:19:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:00.173 05:19:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 01:25:00.173 05:19:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 01:25:00.173 05:19:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 
-u 00000000-0000-0000-0000-000000000001 01:25:00.173 05:19:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:00.173 05:19:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:25:00.173 [2024-12-09 05:19:51.594982] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 01:25:00.173 [2024-12-09 05:19:51.595195] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:25:00.173 [2024-12-09 05:19:51.595338] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 01:25:00.173 [2024-12-09 05:19:51.595519] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:25:00.173 [2024-12-09 05:19:51.598343] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:25:00.173 [2024-12-09 05:19:51.598579] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 01:25:00.173 [2024-12-09 05:19:51.598828] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 01:25:00.173 [2024-12-09 05:19:51.599004] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 01:25:00.173 pt1 01:25:00.173 05:19:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:00.173 05:19:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 01:25:00.173 05:19:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:25:00.173 05:19:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:25:00.173 05:19:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 01:25:00.173 05:19:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:25:00.173 05:19:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 01:25:00.173 05:19:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:25:00.173 05:19:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:25:00.173 05:19:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:25:00.173 05:19:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:25:00.173 05:19:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:25:00.173 05:19:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:25:00.173 05:19:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:00.173 05:19:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:25:00.173 05:19:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:00.173 05:19:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:25:00.173 "name": "raid_bdev1", 01:25:00.173 "uuid": "de633bf5-a554-4554-a355-751055624650", 01:25:00.173 "strip_size_kb": 64, 01:25:00.173 "state": "configuring", 01:25:00.173 "raid_level": "raid0", 01:25:00.173 "superblock": true, 01:25:00.173 "num_base_bdevs": 4, 01:25:00.173 "num_base_bdevs_discovered": 1, 01:25:00.173 "num_base_bdevs_operational": 4, 01:25:00.173 "base_bdevs_list": [ 01:25:00.173 { 01:25:00.173 "name": "pt1", 01:25:00.173 "uuid": "00000000-0000-0000-0000-000000000001", 01:25:00.173 "is_configured": true, 01:25:00.173 "data_offset": 2048, 01:25:00.173 "data_size": 63488 01:25:00.173 }, 01:25:00.173 { 01:25:00.173 "name": null, 01:25:00.173 "uuid": "00000000-0000-0000-0000-000000000002", 01:25:00.173 "is_configured": false, 01:25:00.173 "data_offset": 2048, 01:25:00.173 "data_size": 63488 01:25:00.173 }, 01:25:00.173 { 01:25:00.173 "name": null, 01:25:00.173 
"uuid": "00000000-0000-0000-0000-000000000003", 01:25:00.173 "is_configured": false, 01:25:00.173 "data_offset": 2048, 01:25:00.173 "data_size": 63488 01:25:00.173 }, 01:25:00.173 { 01:25:00.173 "name": null, 01:25:00.173 "uuid": "00000000-0000-0000-0000-000000000004", 01:25:00.173 "is_configured": false, 01:25:00.173 "data_offset": 2048, 01:25:00.173 "data_size": 63488 01:25:00.173 } 01:25:00.173 ] 01:25:00.173 }' 01:25:00.173 05:19:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:25:00.173 05:19:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:25:00.739 05:19:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 01:25:00.739 05:19:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 01:25:00.739 05:19:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:00.739 05:19:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:25:00.739 [2024-12-09 05:19:52.119521] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 01:25:00.739 [2024-12-09 05:19:52.119652] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:25:00.739 [2024-12-09 05:19:52.119696] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 01:25:00.739 [2024-12-09 05:19:52.119724] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:25:00.739 [2024-12-09 05:19:52.120440] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:25:00.739 [2024-12-09 05:19:52.120513] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 01:25:00.739 [2024-12-09 05:19:52.120642] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 01:25:00.739 [2024-12-09 05:19:52.120872] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 01:25:00.739 pt2 01:25:00.739 05:19:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:00.739 05:19:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 01:25:00.739 05:19:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:00.739 05:19:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:25:00.739 [2024-12-09 05:19:52.127480] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 01:25:00.739 05:19:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:00.739 05:19:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 01:25:00.739 05:19:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:25:00.739 05:19:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:25:00.739 05:19:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 01:25:00.739 05:19:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:25:00.739 05:19:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 01:25:00.739 05:19:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:25:00.739 05:19:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:25:00.739 05:19:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:25:00.739 05:19:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:25:00.739 05:19:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:25:00.739 05:19:52 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:25:00.739 05:19:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:00.739 05:19:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:25:00.739 05:19:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:00.739 05:19:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:25:00.739 "name": "raid_bdev1", 01:25:00.739 "uuid": "de633bf5-a554-4554-a355-751055624650", 01:25:00.739 "strip_size_kb": 64, 01:25:00.739 "state": "configuring", 01:25:00.739 "raid_level": "raid0", 01:25:00.739 "superblock": true, 01:25:00.739 "num_base_bdevs": 4, 01:25:00.739 "num_base_bdevs_discovered": 1, 01:25:00.739 "num_base_bdevs_operational": 4, 01:25:00.739 "base_bdevs_list": [ 01:25:00.739 { 01:25:00.739 "name": "pt1", 01:25:00.739 "uuid": "00000000-0000-0000-0000-000000000001", 01:25:00.739 "is_configured": true, 01:25:00.739 "data_offset": 2048, 01:25:00.739 "data_size": 63488 01:25:00.739 }, 01:25:00.739 { 01:25:00.739 "name": null, 01:25:00.740 "uuid": "00000000-0000-0000-0000-000000000002", 01:25:00.740 "is_configured": false, 01:25:00.740 "data_offset": 0, 01:25:00.740 "data_size": 63488 01:25:00.740 }, 01:25:00.740 { 01:25:00.740 "name": null, 01:25:00.740 "uuid": "00000000-0000-0000-0000-000000000003", 01:25:00.740 "is_configured": false, 01:25:00.740 "data_offset": 2048, 01:25:00.740 "data_size": 63488 01:25:00.740 }, 01:25:00.740 { 01:25:00.740 "name": null, 01:25:00.740 "uuid": "00000000-0000-0000-0000-000000000004", 01:25:00.740 "is_configured": false, 01:25:00.740 "data_offset": 2048, 01:25:00.740 "data_size": 63488 01:25:00.740 } 01:25:00.740 ] 01:25:00.740 }' 01:25:00.740 05:19:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:25:00.740 05:19:52 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 01:25:01.308 05:19:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 01:25:01.308 05:19:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 01:25:01.308 05:19:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 01:25:01.308 05:19:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:01.308 05:19:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:25:01.308 [2024-12-09 05:19:52.655755] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 01:25:01.308 [2024-12-09 05:19:52.656078] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:25:01.308 [2024-12-09 05:19:52.656257] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 01:25:01.308 [2024-12-09 05:19:52.656469] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:25:01.308 [2024-12-09 05:19:52.657308] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:25:01.308 [2024-12-09 05:19:52.657521] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 01:25:01.308 [2024-12-09 05:19:52.657851] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 01:25:01.308 [2024-12-09 05:19:52.658068] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 01:25:01.308 pt2 01:25:01.308 05:19:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:01.308 05:19:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 01:25:01.308 05:19:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 01:25:01.308 05:19:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 01:25:01.308 05:19:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:01.308 05:19:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:25:01.308 [2024-12-09 05:19:52.667668] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 01:25:01.308 [2024-12-09 05:19:52.667766] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:25:01.308 [2024-12-09 05:19:52.667799] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 01:25:01.308 [2024-12-09 05:19:52.667815] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:25:01.308 [2024-12-09 05:19:52.668325] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:25:01.308 [2024-12-09 05:19:52.668392] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 01:25:01.308 [2024-12-09 05:19:52.668489] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 01:25:01.308 [2024-12-09 05:19:52.668532] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 01:25:01.308 pt3 01:25:01.308 05:19:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:01.308 05:19:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 01:25:01.308 05:19:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 01:25:01.308 05:19:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 01:25:01.308 05:19:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:01.308 05:19:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:25:01.308 [2024-12-09 05:19:52.675637] 
vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 01:25:01.308 [2024-12-09 05:19:52.675926] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:25:01.308 [2024-12-09 05:19:52.676034] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 01:25:01.308 [2024-12-09 05:19:52.676235] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:25:01.308 [2024-12-09 05:19:52.676856] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:25:01.308 [2024-12-09 05:19:52.676893] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 01:25:01.308 [2024-12-09 05:19:52.676992] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 01:25:01.308 [2024-12-09 05:19:52.677027] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 01:25:01.308 [2024-12-09 05:19:52.677253] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 01:25:01.308 [2024-12-09 05:19:52.677278] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 01:25:01.308 [2024-12-09 05:19:52.677661] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 01:25:01.308 [2024-12-09 05:19:52.677938] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 01:25:01.308 [2024-12-09 05:19:52.677977] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 01:25:01.308 [2024-12-09 05:19:52.678130] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:25:01.308 pt4 01:25:01.308 05:19:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:01.308 05:19:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 01:25:01.308 05:19:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 01:25:01.308 05:19:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 01:25:01.308 05:19:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:25:01.308 05:19:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:25:01.308 05:19:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 01:25:01.308 05:19:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:25:01.308 05:19:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 01:25:01.308 05:19:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:25:01.308 05:19:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:25:01.308 05:19:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:25:01.308 05:19:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:25:01.308 05:19:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:25:01.308 05:19:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:25:01.308 05:19:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:01.308 05:19:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:25:01.308 05:19:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:01.308 05:19:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:25:01.308 "name": "raid_bdev1", 01:25:01.308 "uuid": "de633bf5-a554-4554-a355-751055624650", 01:25:01.308 "strip_size_kb": 64, 01:25:01.308 "state": "online", 01:25:01.308 "raid_level": "raid0", 01:25:01.308 
"superblock": true, 01:25:01.308 "num_base_bdevs": 4, 01:25:01.308 "num_base_bdevs_discovered": 4, 01:25:01.308 "num_base_bdevs_operational": 4, 01:25:01.308 "base_bdevs_list": [ 01:25:01.308 { 01:25:01.308 "name": "pt1", 01:25:01.308 "uuid": "00000000-0000-0000-0000-000000000001", 01:25:01.308 "is_configured": true, 01:25:01.308 "data_offset": 2048, 01:25:01.308 "data_size": 63488 01:25:01.308 }, 01:25:01.308 { 01:25:01.308 "name": "pt2", 01:25:01.308 "uuid": "00000000-0000-0000-0000-000000000002", 01:25:01.308 "is_configured": true, 01:25:01.308 "data_offset": 2048, 01:25:01.308 "data_size": 63488 01:25:01.308 }, 01:25:01.308 { 01:25:01.308 "name": "pt3", 01:25:01.308 "uuid": "00000000-0000-0000-0000-000000000003", 01:25:01.308 "is_configured": true, 01:25:01.308 "data_offset": 2048, 01:25:01.308 "data_size": 63488 01:25:01.308 }, 01:25:01.308 { 01:25:01.308 "name": "pt4", 01:25:01.308 "uuid": "00000000-0000-0000-0000-000000000004", 01:25:01.308 "is_configured": true, 01:25:01.308 "data_offset": 2048, 01:25:01.308 "data_size": 63488 01:25:01.308 } 01:25:01.308 ] 01:25:01.308 }' 01:25:01.308 05:19:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:25:01.308 05:19:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:25:01.875 05:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 01:25:01.875 05:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 01:25:01.875 05:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 01:25:01.875 05:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 01:25:01.875 05:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 01:25:01.875 05:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 01:25:01.875 05:19:53 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 01:25:01.875 05:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 01:25:01.875 05:19:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:01.875 05:19:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:25:01.875 [2024-12-09 05:19:53.228301] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 01:25:01.875 05:19:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:01.875 05:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 01:25:01.875 "name": "raid_bdev1", 01:25:01.875 "aliases": [ 01:25:01.875 "de633bf5-a554-4554-a355-751055624650" 01:25:01.875 ], 01:25:01.875 "product_name": "Raid Volume", 01:25:01.875 "block_size": 512, 01:25:01.875 "num_blocks": 253952, 01:25:01.875 "uuid": "de633bf5-a554-4554-a355-751055624650", 01:25:01.875 "assigned_rate_limits": { 01:25:01.875 "rw_ios_per_sec": 0, 01:25:01.875 "rw_mbytes_per_sec": 0, 01:25:01.875 "r_mbytes_per_sec": 0, 01:25:01.875 "w_mbytes_per_sec": 0 01:25:01.875 }, 01:25:01.875 "claimed": false, 01:25:01.875 "zoned": false, 01:25:01.875 "supported_io_types": { 01:25:01.875 "read": true, 01:25:01.875 "write": true, 01:25:01.876 "unmap": true, 01:25:01.876 "flush": true, 01:25:01.876 "reset": true, 01:25:01.876 "nvme_admin": false, 01:25:01.876 "nvme_io": false, 01:25:01.876 "nvme_io_md": false, 01:25:01.876 "write_zeroes": true, 01:25:01.876 "zcopy": false, 01:25:01.876 "get_zone_info": false, 01:25:01.876 "zone_management": false, 01:25:01.876 "zone_append": false, 01:25:01.876 "compare": false, 01:25:01.876 "compare_and_write": false, 01:25:01.876 "abort": false, 01:25:01.876 "seek_hole": false, 01:25:01.876 "seek_data": false, 01:25:01.876 "copy": false, 01:25:01.876 "nvme_iov_md": false 01:25:01.876 }, 01:25:01.876 
"memory_domains": [ 01:25:01.876 { 01:25:01.876 "dma_device_id": "system", 01:25:01.876 "dma_device_type": 1 01:25:01.876 }, 01:25:01.876 { 01:25:01.876 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:25:01.876 "dma_device_type": 2 01:25:01.876 }, 01:25:01.876 { 01:25:01.876 "dma_device_id": "system", 01:25:01.876 "dma_device_type": 1 01:25:01.876 }, 01:25:01.876 { 01:25:01.876 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:25:01.876 "dma_device_type": 2 01:25:01.876 }, 01:25:01.876 { 01:25:01.876 "dma_device_id": "system", 01:25:01.876 "dma_device_type": 1 01:25:01.876 }, 01:25:01.876 { 01:25:01.876 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:25:01.876 "dma_device_type": 2 01:25:01.876 }, 01:25:01.876 { 01:25:01.876 "dma_device_id": "system", 01:25:01.876 "dma_device_type": 1 01:25:01.876 }, 01:25:01.876 { 01:25:01.876 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:25:01.876 "dma_device_type": 2 01:25:01.876 } 01:25:01.876 ], 01:25:01.876 "driver_specific": { 01:25:01.876 "raid": { 01:25:01.876 "uuid": "de633bf5-a554-4554-a355-751055624650", 01:25:01.876 "strip_size_kb": 64, 01:25:01.876 "state": "online", 01:25:01.876 "raid_level": "raid0", 01:25:01.876 "superblock": true, 01:25:01.876 "num_base_bdevs": 4, 01:25:01.876 "num_base_bdevs_discovered": 4, 01:25:01.876 "num_base_bdevs_operational": 4, 01:25:01.876 "base_bdevs_list": [ 01:25:01.876 { 01:25:01.876 "name": "pt1", 01:25:01.876 "uuid": "00000000-0000-0000-0000-000000000001", 01:25:01.876 "is_configured": true, 01:25:01.876 "data_offset": 2048, 01:25:01.876 "data_size": 63488 01:25:01.876 }, 01:25:01.876 { 01:25:01.876 "name": "pt2", 01:25:01.876 "uuid": "00000000-0000-0000-0000-000000000002", 01:25:01.876 "is_configured": true, 01:25:01.876 "data_offset": 2048, 01:25:01.876 "data_size": 63488 01:25:01.876 }, 01:25:01.876 { 01:25:01.876 "name": "pt3", 01:25:01.876 "uuid": "00000000-0000-0000-0000-000000000003", 01:25:01.876 "is_configured": true, 01:25:01.876 "data_offset": 2048, 01:25:01.876 "data_size": 63488 
01:25:01.876 }, 01:25:01.876 { 01:25:01.876 "name": "pt4", 01:25:01.876 "uuid": "00000000-0000-0000-0000-000000000004", 01:25:01.876 "is_configured": true, 01:25:01.876 "data_offset": 2048, 01:25:01.876 "data_size": 63488 01:25:01.876 } 01:25:01.876 ] 01:25:01.876 } 01:25:01.876 } 01:25:01.876 }' 01:25:01.876 05:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 01:25:01.876 05:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 01:25:01.876 pt2 01:25:01.876 pt3 01:25:01.876 pt4' 01:25:01.876 05:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:25:01.876 05:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 01:25:01.876 05:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:25:01.876 05:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 01:25:01.876 05:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:25:01.876 05:19:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:01.876 05:19:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:25:01.876 05:19:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:01.876 05:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:25:01.876 05:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:25:01.876 05:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:25:01.876 05:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 01:25:01.876 05:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:25:01.876 05:19:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:01.876 05:19:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:25:01.876 05:19:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:02.135 05:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:25:02.135 05:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:25:02.135 05:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:25:02.135 05:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 01:25:02.135 05:19:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:02.135 05:19:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:25:02.135 05:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:25:02.135 05:19:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:02.135 05:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:25:02.135 05:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:25:02.135 05:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:25:02.135 05:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 01:25:02.135 05:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
01:25:02.135 05:19:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:02.135 05:19:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:25:02.135 05:19:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:02.135 05:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:25:02.135 05:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:25:02.135 05:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 01:25:02.135 05:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 01:25:02.135 05:19:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:02.135 05:19:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:25:02.135 [2024-12-09 05:19:53.616405] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 01:25:02.135 05:19:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:02.135 05:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' de633bf5-a554-4554-a355-751055624650 '!=' de633bf5-a554-4554-a355-751055624650 ']' 01:25:02.135 05:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 01:25:02.135 05:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 01:25:02.135 05:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 01:25:02.135 05:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 70755 01:25:02.135 05:19:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 70755 ']' 01:25:02.135 05:19:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 70755 01:25:02.135 05:19:53 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@959 -- # uname 01:25:02.135 05:19:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:25:02.135 05:19:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70755 01:25:02.135 killing process with pid 70755 01:25:02.135 05:19:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:25:02.135 05:19:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:25:02.135 05:19:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70755' 01:25:02.135 05:19:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 70755 01:25:02.136 [2024-12-09 05:19:53.697294] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 01:25:02.136 05:19:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 70755 01:25:02.136 [2024-12-09 05:19:53.697468] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 01:25:02.136 [2024-12-09 05:19:53.697594] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 01:25:02.136 [2024-12-09 05:19:53.697614] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 01:25:02.703 [2024-12-09 05:19:54.022104] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 01:25:03.635 05:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 01:25:03.635 01:25:03.635 real 0m5.885s 01:25:03.635 user 0m8.817s 01:25:03.635 sys 0m0.890s 01:25:03.635 05:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 01:25:03.635 ************************************ 01:25:03.635 END TEST raid_superblock_test 01:25:03.635 ************************************ 01:25:03.635 05:19:55 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:25:03.635 05:19:55 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 01:25:03.635 05:19:55 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 01:25:03.635 05:19:55 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 01:25:03.635 05:19:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 01:25:03.635 ************************************ 01:25:03.635 START TEST raid_read_error_test 01:25:03.635 ************************************ 01:25:03.635 05:19:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 read 01:25:03.635 05:19:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 01:25:03.635 05:19:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 01:25:03.635 05:19:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 01:25:03.635 05:19:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 01:25:03.635 05:19:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 01:25:03.635 05:19:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 01:25:03.635 05:19:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 01:25:03.635 05:19:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 01:25:03.635 05:19:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 01:25:03.635 05:19:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 01:25:03.635 05:19:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 01:25:03.635 05:19:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 01:25:03.635 05:19:55 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 01:25:03.635 05:19:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 01:25:03.635 05:19:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 01:25:03.635 05:19:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 01:25:03.635 05:19:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 01:25:03.635 05:19:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 01:25:03.635 05:19:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 01:25:03.635 05:19:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 01:25:03.635 05:19:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 01:25:03.635 05:19:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 01:25:03.635 05:19:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 01:25:03.635 05:19:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 01:25:03.635 05:19:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 01:25:03.635 05:19:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 01:25:03.635 05:19:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 01:25:03.635 05:19:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 01:25:03.636 05:19:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.HMPRRWWQob 01:25:03.636 05:19:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=71027 01:25:03.636 05:19:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 71027 01:25:03.636 05:19:55 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@835 -- # '[' -z 71027 ']' 01:25:03.636 05:19:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 01:25:03.636 05:19:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:25:03.636 05:19:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 01:25:03.636 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:25:03.636 05:19:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:25:03.636 05:19:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 01:25:03.636 05:19:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 01:25:03.941 [2024-12-09 05:19:55.261886] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
01:25:03.941 [2024-12-09 05:19:55.262072] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71027 ] 01:25:03.941 [2024-12-09 05:19:55.442779] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:25:04.224 [2024-12-09 05:19:55.621193] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:25:04.481 [2024-12-09 05:19:55.843155] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 01:25:04.481 [2024-12-09 05:19:55.843220] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 01:25:04.739 05:19:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:25:04.739 05:19:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 01:25:04.739 05:19:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 01:25:04.739 05:19:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 01:25:04.739 05:19:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:04.739 05:19:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 01:25:04.739 BaseBdev1_malloc 01:25:04.739 05:19:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:04.739 05:19:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 01:25:04.739 05:19:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:04.739 05:19:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 01:25:04.739 true 01:25:04.739 05:19:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
01:25:04.739 05:19:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 01:25:04.739 05:19:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:04.739 05:19:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 01:25:04.739 [2024-12-09 05:19:56.296227] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 01:25:04.739 [2024-12-09 05:19:56.296506] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:25:04.739 [2024-12-09 05:19:56.296579] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 01:25:04.739 [2024-12-09 05:19:56.296603] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:25:04.739 [2024-12-09 05:19:56.299286] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:25:04.739 [2024-12-09 05:19:56.299347] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 01:25:04.739 BaseBdev1 01:25:04.739 05:19:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:04.739 05:19:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 01:25:04.739 05:19:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 01:25:04.739 05:19:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:04.739 05:19:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 01:25:04.739 BaseBdev2_malloc 01:25:04.739 05:19:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:04.739 05:19:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 01:25:04.739 05:19:56 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 01:25:04.739 05:19:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 01:25:04.739 true 01:25:04.739 05:19:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:04.739 05:19:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 01:25:04.739 05:19:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:04.739 05:19:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 01:25:04.998 [2024-12-09 05:19:56.357718] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 01:25:04.998 [2024-12-09 05:19:56.358057] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:25:04.998 [2024-12-09 05:19:56.358142] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 01:25:04.998 [2024-12-09 05:19:56.358452] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:25:04.998 BaseBdev2 01:25:04.998 [2024-12-09 05:19:56.362019] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:25:04.998 [2024-12-09 05:19:56.362077] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 01:25:04.998 05:19:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:04.998 05:19:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 01:25:04.998 05:19:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 01:25:04.998 05:19:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:04.998 05:19:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 01:25:04.998 BaseBdev3_malloc 01:25:04.998 05:19:56 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:04.998 05:19:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 01:25:04.998 05:19:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:04.998 05:19:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 01:25:04.998 true 01:25:04.998 05:19:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:04.998 05:19:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 01:25:04.998 05:19:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:04.998 05:19:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 01:25:04.998 [2024-12-09 05:19:56.426026] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 01:25:04.998 [2024-12-09 05:19:56.426072] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:25:04.998 [2024-12-09 05:19:56.426109] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 01:25:04.998 [2024-12-09 05:19:56.426155] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:25:04.998 [2024-12-09 05:19:56.428963] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:25:04.998 BaseBdev3 01:25:04.998 [2024-12-09 05:19:56.429172] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 01:25:04.998 05:19:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:04.998 05:19:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 01:25:04.998 05:19:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 01:25:04.998 05:19:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:04.998 05:19:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 01:25:04.998 BaseBdev4_malloc 01:25:04.998 05:19:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:04.998 05:19:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 01:25:04.998 05:19:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:04.998 05:19:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 01:25:04.998 true 01:25:04.998 05:19:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:04.998 05:19:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 01:25:04.998 05:19:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:04.998 05:19:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 01:25:04.998 [2024-12-09 05:19:56.482113] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 01:25:04.998 [2024-12-09 05:19:56.482216] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:25:04.998 [2024-12-09 05:19:56.482276] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 01:25:04.998 [2024-12-09 05:19:56.482430] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:25:04.998 [2024-12-09 05:19:56.485141] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:25:04.998 [2024-12-09 05:19:56.485337] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 01:25:04.998 BaseBdev4 01:25:04.998 05:19:56 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:04.998 05:19:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 01:25:04.998 05:19:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:04.998 05:19:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 01:25:04.998 [2024-12-09 05:19:56.490233] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 01:25:04.998 [2024-12-09 05:19:56.492617] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 01:25:04.999 [2024-12-09 05:19:56.492842] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 01:25:04.999 [2024-12-09 05:19:56.492974] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 01:25:04.999 [2024-12-09 05:19:56.493307] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 01:25:04.999 [2024-12-09 05:19:56.493336] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 01:25:04.999 [2024-12-09 05:19:56.493724] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 01:25:04.999 [2024-12-09 05:19:56.493966] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 01:25:04.999 [2024-12-09 05:19:56.493982] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 01:25:04.999 [2024-12-09 05:19:56.494198] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:25:04.999 05:19:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:04.999 05:19:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 01:25:04.999 05:19:56 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:25:04.999 05:19:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:25:04.999 05:19:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 01:25:04.999 05:19:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:25:04.999 05:19:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 01:25:04.999 05:19:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:25:04.999 05:19:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:25:04.999 05:19:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:25:04.999 05:19:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:25:04.999 05:19:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:25:04.999 05:19:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:04.999 05:19:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 01:25:04.999 05:19:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:25:04.999 05:19:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:04.999 05:19:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:25:04.999 "name": "raid_bdev1", 01:25:04.999 "uuid": "6b9ef184-cf57-423d-ace3-21b1692c8046", 01:25:04.999 "strip_size_kb": 64, 01:25:04.999 "state": "online", 01:25:04.999 "raid_level": "raid0", 01:25:04.999 "superblock": true, 01:25:04.999 "num_base_bdevs": 4, 01:25:04.999 "num_base_bdevs_discovered": 4, 01:25:04.999 "num_base_bdevs_operational": 4, 01:25:04.999 "base_bdevs_list": [ 01:25:04.999 
{ 01:25:04.999 "name": "BaseBdev1", 01:25:04.999 "uuid": "b3e91258-02ca-53f7-9106-5dbad4f7a478", 01:25:04.999 "is_configured": true, 01:25:04.999 "data_offset": 2048, 01:25:04.999 "data_size": 63488 01:25:04.999 }, 01:25:04.999 { 01:25:04.999 "name": "BaseBdev2", 01:25:04.999 "uuid": "6f57ab4f-4fb0-5384-8043-595f80136775", 01:25:04.999 "is_configured": true, 01:25:04.999 "data_offset": 2048, 01:25:04.999 "data_size": 63488 01:25:04.999 }, 01:25:04.999 { 01:25:04.999 "name": "BaseBdev3", 01:25:04.999 "uuid": "be775a34-8b86-59bc-90af-7cfc70cfc0b8", 01:25:04.999 "is_configured": true, 01:25:04.999 "data_offset": 2048, 01:25:04.999 "data_size": 63488 01:25:04.999 }, 01:25:04.999 { 01:25:04.999 "name": "BaseBdev4", 01:25:04.999 "uuid": "3df29d32-c6b9-593b-b2cf-3cdbc58b9859", 01:25:04.999 "is_configured": true, 01:25:04.999 "data_offset": 2048, 01:25:04.999 "data_size": 63488 01:25:04.999 } 01:25:04.999 ] 01:25:04.999 }' 01:25:04.999 05:19:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:25:04.999 05:19:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 01:25:05.563 05:19:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 01:25:05.564 05:19:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 01:25:05.564 [2024-12-09 05:19:57.143826] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 01:25:06.496 05:19:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 01:25:06.496 05:19:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:06.496 05:19:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 01:25:06.496 05:19:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:06.496 05:19:58 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 01:25:06.496 05:19:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 01:25:06.496 05:19:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 01:25:06.496 05:19:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 01:25:06.496 05:19:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:25:06.496 05:19:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:25:06.496 05:19:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 01:25:06.496 05:19:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:25:06.496 05:19:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 01:25:06.496 05:19:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:25:06.496 05:19:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:25:06.496 05:19:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:25:06.496 05:19:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:25:06.496 05:19:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:25:06.496 05:19:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:25:06.496 05:19:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:06.496 05:19:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 01:25:06.496 05:19:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:06.496 05:19:58 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:25:06.496 "name": "raid_bdev1", 01:25:06.496 "uuid": "6b9ef184-cf57-423d-ace3-21b1692c8046", 01:25:06.496 "strip_size_kb": 64, 01:25:06.496 "state": "online", 01:25:06.496 "raid_level": "raid0", 01:25:06.496 "superblock": true, 01:25:06.496 "num_base_bdevs": 4, 01:25:06.496 "num_base_bdevs_discovered": 4, 01:25:06.496 "num_base_bdevs_operational": 4, 01:25:06.496 "base_bdevs_list": [ 01:25:06.496 { 01:25:06.496 "name": "BaseBdev1", 01:25:06.496 "uuid": "b3e91258-02ca-53f7-9106-5dbad4f7a478", 01:25:06.496 "is_configured": true, 01:25:06.496 "data_offset": 2048, 01:25:06.496 "data_size": 63488 01:25:06.496 }, 01:25:06.496 { 01:25:06.496 "name": "BaseBdev2", 01:25:06.496 "uuid": "6f57ab4f-4fb0-5384-8043-595f80136775", 01:25:06.496 "is_configured": true, 01:25:06.496 "data_offset": 2048, 01:25:06.496 "data_size": 63488 01:25:06.496 }, 01:25:06.496 { 01:25:06.496 "name": "BaseBdev3", 01:25:06.496 "uuid": "be775a34-8b86-59bc-90af-7cfc70cfc0b8", 01:25:06.496 "is_configured": true, 01:25:06.496 "data_offset": 2048, 01:25:06.496 "data_size": 63488 01:25:06.496 }, 01:25:06.496 { 01:25:06.496 "name": "BaseBdev4", 01:25:06.496 "uuid": "3df29d32-c6b9-593b-b2cf-3cdbc58b9859", 01:25:06.496 "is_configured": true, 01:25:06.496 "data_offset": 2048, 01:25:06.496 "data_size": 63488 01:25:06.496 } 01:25:06.496 ] 01:25:06.496 }' 01:25:06.496 05:19:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:25:06.496 05:19:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 01:25:07.062 05:19:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 01:25:07.062 05:19:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:07.062 05:19:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 01:25:07.062 [2024-12-09 05:19:58.597049] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 01:25:07.062 [2024-12-09 05:19:58.597098] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 01:25:07.062 { 01:25:07.062 "results": [ 01:25:07.062 { 01:25:07.062 "job": "raid_bdev1", 01:25:07.062 "core_mask": "0x1", 01:25:07.062 "workload": "randrw", 01:25:07.062 "percentage": 50, 01:25:07.062 "status": "finished", 01:25:07.062 "queue_depth": 1, 01:25:07.062 "io_size": 131072, 01:25:07.062 "runtime": 1.451023, 01:25:07.062 "iops": 10221.753893632285, 01:25:07.062 "mibps": 1277.7192367040357, 01:25:07.062 "io_failed": 1, 01:25:07.062 "io_timeout": 0, 01:25:07.062 "avg_latency_us": 136.65293430495885, 01:25:07.062 "min_latency_us": 35.14181818181818, 01:25:07.062 "max_latency_us": 1824.581818181818 01:25:07.062 } 01:25:07.062 ], 01:25:07.062 "core_count": 1 01:25:07.062 } 01:25:07.062 [2024-12-09 05:19:58.600929] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 01:25:07.062 [2024-12-09 05:19:58.601071] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:25:07.062 [2024-12-09 05:19:58.601154] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 01:25:07.062 [2024-12-09 05:19:58.601180] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 01:25:07.062 05:19:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:07.062 05:19:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 71027 01:25:07.062 05:19:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 71027 ']' 01:25:07.062 05:19:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 71027 01:25:07.062 05:19:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 01:25:07.062 05:19:58 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:25:07.062 05:19:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71027 01:25:07.062 killing process with pid 71027 01:25:07.062 05:19:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:25:07.062 05:19:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:25:07.062 05:19:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71027' 01:25:07.062 05:19:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 71027 01:25:07.062 [2024-12-09 05:19:58.644878] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 01:25:07.062 05:19:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 71027 01:25:07.628 [2024-12-09 05:19:58.936589] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 01:25:08.563 05:20:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.HMPRRWWQob 01:25:08.563 05:20:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 01:25:08.563 05:20:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 01:25:08.563 05:20:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.69 01:25:08.563 05:20:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 01:25:08.563 05:20:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 01:25:08.563 05:20:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 01:25:08.563 05:20:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.69 != \0\.\0\0 ]] 01:25:08.563 01:25:08.563 real 0m4.984s 01:25:08.563 user 0m6.097s 01:25:08.563 sys 0m0.648s 01:25:08.563 05:20:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # 
xtrace_disable 01:25:08.563 ************************************ 01:25:08.563 END TEST raid_read_error_test 01:25:08.563 ************************************ 01:25:08.563 05:20:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 01:25:08.563 05:20:00 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 01:25:08.563 05:20:00 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 01:25:08.563 05:20:00 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 01:25:08.563 05:20:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 01:25:08.822 ************************************ 01:25:08.822 START TEST raid_write_error_test 01:25:08.822 ************************************ 01:25:08.822 05:20:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 write 01:25:08.822 05:20:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 01:25:08.822 05:20:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 01:25:08.822 05:20:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 01:25:08.822 05:20:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 01:25:08.822 05:20:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 01:25:08.822 05:20:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 01:25:08.822 05:20:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 01:25:08.822 05:20:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 01:25:08.822 05:20:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 01:25:08.822 05:20:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 01:25:08.822 05:20:00 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 01:25:08.822 05:20:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 01:25:08.822 05:20:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 01:25:08.822 05:20:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 01:25:08.822 05:20:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 01:25:08.822 05:20:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 01:25:08.822 05:20:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 01:25:08.822 05:20:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 01:25:08.822 05:20:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 01:25:08.822 05:20:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 01:25:08.822 05:20:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 01:25:08.822 05:20:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 01:25:08.822 05:20:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 01:25:08.822 05:20:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 01:25:08.822 05:20:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 01:25:08.822 05:20:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 01:25:08.822 05:20:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 01:25:08.822 05:20:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 01:25:08.822 05:20:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.diyCp8UEnS 01:25:08.822 05:20:00 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=71174 01:25:08.822 05:20:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 71174 01:25:08.822 05:20:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 71174 ']' 01:25:08.822 05:20:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 01:25:08.822 05:20:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:25:08.822 05:20:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 01:25:08.822 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:25:08.822 05:20:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:25:08.822 05:20:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 01:25:08.822 05:20:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 01:25:08.822 [2024-12-09 05:20:00.315421] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
01:25:08.822 [2024-12-09 05:20:00.315600] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71174 ] 01:25:09.080 [2024-12-09 05:20:00.506053] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:25:09.080 [2024-12-09 05:20:00.642037] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:25:09.339 [2024-12-09 05:20:00.861845] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 01:25:09.339 [2024-12-09 05:20:00.861952] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 01:25:09.906 05:20:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:25:09.906 05:20:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 01:25:09.906 05:20:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 01:25:09.906 05:20:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 01:25:09.906 05:20:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:09.906 05:20:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 01:25:09.906 BaseBdev1_malloc 01:25:09.906 05:20:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:09.906 05:20:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 01:25:09.906 05:20:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:09.906 05:20:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 01:25:09.906 true 01:25:09.906 05:20:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 01:25:09.906 05:20:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 01:25:09.906 05:20:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:09.906 05:20:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 01:25:09.906 [2024-12-09 05:20:01.371422] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 01:25:09.906 [2024-12-09 05:20:01.371513] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:25:09.906 [2024-12-09 05:20:01.371545] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 01:25:09.906 [2024-12-09 05:20:01.371564] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:25:09.906 [2024-12-09 05:20:01.374856] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:25:09.906 [2024-12-09 05:20:01.374949] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 01:25:09.906 BaseBdev1 01:25:09.906 05:20:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:09.906 05:20:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 01:25:09.906 05:20:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 01:25:09.906 05:20:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:09.906 05:20:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 01:25:09.906 BaseBdev2_malloc 01:25:09.906 05:20:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:09.906 05:20:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 01:25:09.906 05:20:01 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:09.906 05:20:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 01:25:09.906 true 01:25:09.906 05:20:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:09.906 05:20:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 01:25:09.906 05:20:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:09.906 05:20:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 01:25:09.906 [2024-12-09 05:20:01.426323] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 01:25:09.906 [2024-12-09 05:20:01.426427] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:25:09.906 [2024-12-09 05:20:01.426453] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 01:25:09.906 [2024-12-09 05:20:01.426470] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:25:09.906 [2024-12-09 05:20:01.429267] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:25:09.906 [2024-12-09 05:20:01.429312] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 01:25:09.906 BaseBdev2 01:25:09.906 05:20:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:09.906 05:20:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 01:25:09.906 05:20:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 01:25:09.906 05:20:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:09.906 05:20:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
01:25:09.906 BaseBdev3_malloc 01:25:09.906 05:20:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:09.906 05:20:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 01:25:09.906 05:20:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:09.906 05:20:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 01:25:09.906 true 01:25:09.906 05:20:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:09.906 05:20:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 01:25:09.906 05:20:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:09.906 05:20:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 01:25:09.906 [2024-12-09 05:20:01.494198] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 01:25:09.906 [2024-12-09 05:20:01.494274] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:25:09.906 [2024-12-09 05:20:01.494298] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 01:25:09.906 [2024-12-09 05:20:01.494315] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:25:09.906 [2024-12-09 05:20:01.497174] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:25:09.906 [2024-12-09 05:20:01.497249] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 01:25:09.906 BaseBdev3 01:25:09.906 05:20:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:09.906 05:20:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 01:25:09.906 05:20:01 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 01:25:09.906 05:20:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:09.906 05:20:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 01:25:10.164 BaseBdev4_malloc 01:25:10.164 05:20:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:10.164 05:20:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 01:25:10.164 05:20:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:10.164 05:20:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 01:25:10.164 true 01:25:10.164 05:20:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:10.164 05:20:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 01:25:10.164 05:20:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:10.164 05:20:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 01:25:10.164 [2024-12-09 05:20:01.551931] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 01:25:10.164 [2024-12-09 05:20:01.552007] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:25:10.164 [2024-12-09 05:20:01.552034] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 01:25:10.164 [2024-12-09 05:20:01.552050] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:25:10.164 BaseBdev4 01:25:10.164 [2024-12-09 05:20:01.555032] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:25:10.164 [2024-12-09 05:20:01.555080] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 
01:25:10.164 05:20:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:10.165 05:20:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 01:25:10.165 05:20:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:10.165 05:20:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 01:25:10.165 [2024-12-09 05:20:01.560043] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 01:25:10.165 [2024-12-09 05:20:01.562662] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 01:25:10.165 [2024-12-09 05:20:01.562827] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 01:25:10.165 [2024-12-09 05:20:01.562947] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 01:25:10.165 [2024-12-09 05:20:01.563212] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 01:25:10.165 [2024-12-09 05:20:01.563241] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 01:25:10.165 [2024-12-09 05:20:01.563618] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 01:25:10.165 [2024-12-09 05:20:01.563887] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 01:25:10.165 [2024-12-09 05:20:01.563910] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 01:25:10.165 [2024-12-09 05:20:01.564182] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:25:10.165 05:20:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:10.165 05:20:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online raid0 64 4 01:25:10.165 05:20:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:25:10.165 05:20:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:25:10.165 05:20:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 01:25:10.165 05:20:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:25:10.165 05:20:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 01:25:10.165 05:20:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:25:10.165 05:20:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:25:10.165 05:20:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:25:10.165 05:20:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:25:10.165 05:20:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:25:10.165 05:20:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:25:10.165 05:20:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:10.165 05:20:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 01:25:10.165 05:20:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:10.165 05:20:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:25:10.165 "name": "raid_bdev1", 01:25:10.165 "uuid": "b25eaa2f-eecc-47f0-8406-946f6d991ad0", 01:25:10.165 "strip_size_kb": 64, 01:25:10.165 "state": "online", 01:25:10.165 "raid_level": "raid0", 01:25:10.165 "superblock": true, 01:25:10.165 "num_base_bdevs": 4, 01:25:10.165 "num_base_bdevs_discovered": 4, 01:25:10.165 
"num_base_bdevs_operational": 4, 01:25:10.165 "base_bdevs_list": [ 01:25:10.165 { 01:25:10.165 "name": "BaseBdev1", 01:25:10.165 "uuid": "b277fdd7-4d19-50c2-bee3-094325bfe7ce", 01:25:10.165 "is_configured": true, 01:25:10.165 "data_offset": 2048, 01:25:10.165 "data_size": 63488 01:25:10.165 }, 01:25:10.165 { 01:25:10.165 "name": "BaseBdev2", 01:25:10.165 "uuid": "23a47431-d536-5e15-94a6-6dbc58316601", 01:25:10.165 "is_configured": true, 01:25:10.165 "data_offset": 2048, 01:25:10.165 "data_size": 63488 01:25:10.165 }, 01:25:10.165 { 01:25:10.165 "name": "BaseBdev3", 01:25:10.165 "uuid": "acae3631-9286-5e4b-8086-54cb78d62f63", 01:25:10.165 "is_configured": true, 01:25:10.165 "data_offset": 2048, 01:25:10.165 "data_size": 63488 01:25:10.165 }, 01:25:10.165 { 01:25:10.165 "name": "BaseBdev4", 01:25:10.165 "uuid": "f776702c-3ea6-56d8-b647-a8f8b2741ed6", 01:25:10.165 "is_configured": true, 01:25:10.165 "data_offset": 2048, 01:25:10.165 "data_size": 63488 01:25:10.165 } 01:25:10.165 ] 01:25:10.165 }' 01:25:10.165 05:20:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:25:10.165 05:20:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 01:25:10.731 05:20:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 01:25:10.731 05:20:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 01:25:10.731 [2024-12-09 05:20:02.189753] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 01:25:11.666 05:20:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 01:25:11.666 05:20:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:11.666 05:20:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 01:25:11.666 05:20:03 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:11.666 05:20:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 01:25:11.666 05:20:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 01:25:11.666 05:20:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 01:25:11.666 05:20:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 01:25:11.666 05:20:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:25:11.666 05:20:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:25:11.666 05:20:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 01:25:11.666 05:20:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:25:11.666 05:20:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 01:25:11.666 05:20:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:25:11.666 05:20:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:25:11.666 05:20:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:25:11.666 05:20:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:25:11.666 05:20:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:25:11.666 05:20:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:25:11.666 05:20:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:11.666 05:20:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 01:25:11.666 05:20:03 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:11.666 05:20:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:25:11.666 "name": "raid_bdev1", 01:25:11.666 "uuid": "b25eaa2f-eecc-47f0-8406-946f6d991ad0", 01:25:11.666 "strip_size_kb": 64, 01:25:11.666 "state": "online", 01:25:11.666 "raid_level": "raid0", 01:25:11.666 "superblock": true, 01:25:11.666 "num_base_bdevs": 4, 01:25:11.666 "num_base_bdevs_discovered": 4, 01:25:11.666 "num_base_bdevs_operational": 4, 01:25:11.666 "base_bdevs_list": [ 01:25:11.666 { 01:25:11.666 "name": "BaseBdev1", 01:25:11.666 "uuid": "b277fdd7-4d19-50c2-bee3-094325bfe7ce", 01:25:11.666 "is_configured": true, 01:25:11.666 "data_offset": 2048, 01:25:11.666 "data_size": 63488 01:25:11.666 }, 01:25:11.666 { 01:25:11.666 "name": "BaseBdev2", 01:25:11.666 "uuid": "23a47431-d536-5e15-94a6-6dbc58316601", 01:25:11.666 "is_configured": true, 01:25:11.666 "data_offset": 2048, 01:25:11.666 "data_size": 63488 01:25:11.666 }, 01:25:11.666 { 01:25:11.666 "name": "BaseBdev3", 01:25:11.666 "uuid": "acae3631-9286-5e4b-8086-54cb78d62f63", 01:25:11.666 "is_configured": true, 01:25:11.666 "data_offset": 2048, 01:25:11.666 "data_size": 63488 01:25:11.666 }, 01:25:11.666 { 01:25:11.666 "name": "BaseBdev4", 01:25:11.666 "uuid": "f776702c-3ea6-56d8-b647-a8f8b2741ed6", 01:25:11.666 "is_configured": true, 01:25:11.666 "data_offset": 2048, 01:25:11.666 "data_size": 63488 01:25:11.666 } 01:25:11.666 ] 01:25:11.666 }' 01:25:11.666 05:20:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:25:11.666 05:20:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 01:25:12.233 05:20:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 01:25:12.233 05:20:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:12.233 05:20:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # 
set +x 01:25:12.233 [2024-12-09 05:20:03.634187] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 01:25:12.233 [2024-12-09 05:20:03.634245] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 01:25:12.233 [2024-12-09 05:20:03.637818] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 01:25:12.233 [2024-12-09 05:20:03.637937] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:25:12.233 [2024-12-09 05:20:03.637995] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 01:25:12.233 [2024-12-09 05:20:03.638023] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 01:25:12.233 { 01:25:12.233 "results": [ 01:25:12.233 { 01:25:12.233 "job": "raid_bdev1", 01:25:12.233 "core_mask": "0x1", 01:25:12.233 "workload": "randrw", 01:25:12.233 "percentage": 50, 01:25:12.233 "status": "finished", 01:25:12.233 "queue_depth": 1, 01:25:12.233 "io_size": 131072, 01:25:12.233 "runtime": 1.44203, 01:25:12.233 "iops": 9986.616089817826, 01:25:12.233 "mibps": 1248.3270112272282, 01:25:12.233 "io_failed": 1, 01:25:12.233 "io_timeout": 0, 01:25:12.233 "avg_latency_us": 139.78773655174155, 01:25:12.233 "min_latency_us": 35.374545454545455, 01:25:12.233 "max_latency_us": 2025.658181818182 01:25:12.233 } 01:25:12.233 ], 01:25:12.233 "core_count": 1 01:25:12.233 } 01:25:12.233 05:20:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:12.233 05:20:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 71174 01:25:12.233 05:20:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 71174 ']' 01:25:12.233 05:20:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 71174 01:25:12.233 05:20:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 
01:25:12.233 05:20:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:25:12.233 05:20:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71174 01:25:12.233 05:20:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:25:12.233 05:20:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:25:12.233 05:20:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71174' 01:25:12.233 killing process with pid 71174 01:25:12.233 05:20:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 71174 01:25:12.233 [2024-12-09 05:20:03.675630] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 01:25:12.233 05:20:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 71174 01:25:12.491 [2024-12-09 05:20:03.975133] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 01:25:13.865 05:20:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.diyCp8UEnS 01:25:13.865 05:20:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 01:25:13.865 05:20:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 01:25:13.865 05:20:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.69 01:25:13.865 05:20:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 01:25:13.865 05:20:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 01:25:13.865 05:20:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 01:25:13.865 05:20:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.69 != \0\.\0\0 ]] 01:25:13.865 01:25:13.865 real 0m4.994s 01:25:13.865 user 0m6.101s 01:25:13.865 sys 0m0.632s 01:25:13.865 05:20:05 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 01:25:13.865 ************************************ 01:25:13.865 END TEST raid_write_error_test 01:25:13.865 ************************************ 01:25:13.865 05:20:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 01:25:13.865 05:20:05 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 01:25:13.865 05:20:05 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 01:25:13.865 05:20:05 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 01:25:13.865 05:20:05 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 01:25:13.865 05:20:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 01:25:13.865 ************************************ 01:25:13.865 START TEST raid_state_function_test 01:25:13.865 ************************************ 01:25:13.865 05:20:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 false 01:25:13.865 05:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 01:25:13.865 05:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 01:25:13.865 05:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 01:25:13.866 05:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 01:25:13.866 05:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 01:25:13.866 05:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 01:25:13.866 05:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 01:25:13.866 05:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 01:25:13.866 05:20:05 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 01:25:13.866 05:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 01:25:13.866 05:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 01:25:13.866 05:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 01:25:13.866 05:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 01:25:13.866 05:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 01:25:13.866 05:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 01:25:13.866 05:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 01:25:13.866 05:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 01:25:13.866 05:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 01:25:13.866 05:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 01:25:13.866 05:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 01:25:13.866 05:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 01:25:13.866 05:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 01:25:13.866 05:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 01:25:13.866 05:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 01:25:13.866 05:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 01:25:13.866 05:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 01:25:13.866 05:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # 
strip_size_create_arg='-z 64' 01:25:13.866 05:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 01:25:13.866 05:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 01:25:13.866 05:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=71325 01:25:13.866 Process raid pid: 71325 01:25:13.866 05:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 71325' 01:25:13.866 05:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 71325 01:25:13.866 05:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 01:25:13.866 05:20:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 71325 ']' 01:25:13.866 05:20:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:25:13.866 05:20:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 01:25:13.866 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:25:13.866 05:20:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:25:13.866 05:20:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 01:25:13.866 05:20:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:25:13.866 [2024-12-09 05:20:05.332105] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
01:25:13.866 [2024-12-09 05:20:05.332279] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
01:25:14.124 [2024-12-09 05:20:05.506848] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
01:25:14.124 [2024-12-09 05:20:05.637838] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
01:25:14.381 [2024-12-09 05:20:05.847729] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
01:25:14.381 [2024-12-09 05:20:05.847787] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
01:25:14.946 05:20:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
01:25:14.947 05:20:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0
01:25:14.947 05:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
01:25:14.947 05:20:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
01:25:14.947 05:20:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
01:25:14.947 [2024-12-09 05:20:06.381822] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
01:25:14.947 [2024-12-09 05:20:06.381900] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
01:25:14.947 [2024-12-09 05:20:06.381919] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
01:25:14.947 [2024-12-09 05:20:06.381936] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
01:25:14.947 [2024-12-09 05:20:06.381946] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
01:25:14.947 [2024-12-09 05:20:06.381961] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
01:25:14.947 [2024-12-09 05:20:06.381971] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
01:25:14.947 [2024-12-09 05:20:06.381984] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
01:25:14.947 05:20:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:25:14.947 05:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
01:25:14.947 05:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
01:25:14.947 05:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
01:25:14.947 05:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
01:25:14.947 05:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
01:25:14.947 05:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
01:25:14.947 05:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
01:25:14.947 05:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
01:25:14.947 05:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
01:25:14.947 05:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
01:25:14.947 05:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
01:25:14.947 05:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
01:25:14.947 05:20:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
01:25:14.947 05:20:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
01:25:14.947 05:20:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:25:14.947 05:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
01:25:14.947 "name": "Existed_Raid",
01:25:14.947 "uuid": "00000000-0000-0000-0000-000000000000",
01:25:14.947 "strip_size_kb": 64,
01:25:14.947 "state": "configuring",
01:25:14.947 "raid_level": "concat",
01:25:14.947 "superblock": false,
01:25:14.947 "num_base_bdevs": 4,
01:25:14.947 "num_base_bdevs_discovered": 0,
01:25:14.947 "num_base_bdevs_operational": 4,
01:25:14.947 "base_bdevs_list": [
01:25:14.947 {
01:25:14.947 "name": "BaseBdev1",
01:25:14.947 "uuid": "00000000-0000-0000-0000-000000000000",
01:25:14.947 "is_configured": false,
01:25:14.947 "data_offset": 0,
01:25:14.947 "data_size": 0
01:25:14.947 },
01:25:14.947 {
01:25:14.947 "name": "BaseBdev2",
01:25:14.947 "uuid": "00000000-0000-0000-0000-000000000000",
01:25:14.947 "is_configured": false,
01:25:14.947 "data_offset": 0,
01:25:14.947 "data_size": 0
01:25:14.947 },
01:25:14.947 {
01:25:14.947 "name": "BaseBdev3",
01:25:14.947 "uuid": "00000000-0000-0000-0000-000000000000",
01:25:14.947 "is_configured": false,
01:25:14.947 "data_offset": 0,
01:25:14.947 "data_size": 0
01:25:14.947 },
01:25:14.947 {
01:25:14.947 "name": "BaseBdev4",
01:25:14.947 "uuid": "00000000-0000-0000-0000-000000000000",
01:25:14.947 "is_configured": false,
01:25:14.947 "data_offset": 0,
01:25:14.947 "data_size": 0
01:25:14.947 }
01:25:14.947 ]
01:25:14.947 }'
01:25:14.947 05:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
01:25:14.947 05:20:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
01:25:15.526 05:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
01:25:15.526 05:20:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
01:25:15.526 05:20:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
01:25:15.526 [2024-12-09 05:20:06.873919] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
01:25:15.526 [2024-12-09 05:20:06.873997] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring
01:25:15.526 05:20:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:25:15.526 05:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
01:25:15.526 05:20:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
01:25:15.526 05:20:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
01:25:15.526 [2024-12-09 05:20:06.881880] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
01:25:15.526 [2024-12-09 05:20:06.881938] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
01:25:15.526 [2024-12-09 05:20:06.881963] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
01:25:15.526 [2024-12-09 05:20:06.881980] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
01:25:15.526 [2024-12-09 05:20:06.881992] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
01:25:15.526 [2024-12-09 05:20:06.882005] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
01:25:15.526 [2024-12-09 05:20:06.882015] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
01:25:15.526 [2024-12-09 05:20:06.882029] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
01:25:15.526 05:20:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:25:15.526 05:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
01:25:15.526 05:20:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
01:25:15.526 05:20:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
01:25:15.526 [2024-12-09 05:20:06.928298] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
01:25:15.526 BaseBdev1
01:25:15.526 05:20:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:25:15.526 05:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
01:25:15.526 05:20:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1
01:25:15.526 05:20:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
01:25:15.526 05:20:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
01:25:15.526 05:20:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
01:25:15.526 05:20:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
01:25:15.526 05:20:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
01:25:15.526 05:20:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
01:25:15.526 05:20:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
01:25:15.526 05:20:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:25:15.526 05:20:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
01:25:15.526 05:20:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
01:25:15.526 05:20:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
01:25:15.526 [
01:25:15.526 {
01:25:15.526 "name": "BaseBdev1",
01:25:15.526 "aliases": [
01:25:15.526 "140e6ae1-c7b4-4cea-b277-c5738cff0636"
01:25:15.526 ],
01:25:15.526 "product_name": "Malloc disk",
01:25:15.526 "block_size": 512,
01:25:15.526 "num_blocks": 65536,
01:25:15.526 "uuid": "140e6ae1-c7b4-4cea-b277-c5738cff0636",
01:25:15.526 "assigned_rate_limits": {
01:25:15.526 "rw_ios_per_sec": 0,
01:25:15.526 "rw_mbytes_per_sec": 0,
01:25:15.526 "r_mbytes_per_sec": 0,
01:25:15.526 "w_mbytes_per_sec": 0
01:25:15.526 },
01:25:15.526 "claimed": true,
01:25:15.526 "claim_type": "exclusive_write",
01:25:15.526 "zoned": false,
01:25:15.526 "supported_io_types": {
01:25:15.526 "read": true,
01:25:15.526 "write": true,
01:25:15.526 "unmap": true,
01:25:15.526 "flush": true,
01:25:15.526 "reset": true,
01:25:15.526 "nvme_admin": false,
01:25:15.526 "nvme_io": false,
01:25:15.526 "nvme_io_md": false,
01:25:15.526 "write_zeroes": true,
01:25:15.526 "zcopy": true,
01:25:15.526 "get_zone_info": false,
01:25:15.526 "zone_management": false,
01:25:15.526 "zone_append": false,
01:25:15.526 "compare": false,
01:25:15.526 "compare_and_write": false,
01:25:15.526 "abort": true,
01:25:15.526 "seek_hole": false,
01:25:15.526 "seek_data": false,
01:25:15.526 "copy": true,
01:25:15.526 "nvme_iov_md": false
01:25:15.526 },
01:25:15.526 "memory_domains": [
01:25:15.526 {
01:25:15.526 "dma_device_id": "system",
01:25:15.526 "dma_device_type": 1
01:25:15.526 },
01:25:15.526 {
01:25:15.526 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
01:25:15.526 "dma_device_type": 2
01:25:15.526 }
01:25:15.526 ],
01:25:15.526 "driver_specific": {}
01:25:15.526 }
01:25:15.526 ]
01:25:15.526 05:20:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:25:15.526 05:20:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
01:25:15.526 05:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
01:25:15.526 05:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
01:25:15.526 05:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
01:25:15.526 05:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
01:25:15.526 05:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
01:25:15.526 05:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
01:25:15.526 05:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
01:25:15.526 05:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
01:25:15.526 05:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
01:25:15.526 05:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
01:25:15.526 05:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
01:25:15.526 05:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
01:25:15.526 05:20:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
01:25:15.526 05:20:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
01:25:15.526 05:20:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:25:15.526 05:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
01:25:15.526 "name": "Existed_Raid",
01:25:15.526 "uuid": "00000000-0000-0000-0000-000000000000",
01:25:15.526 "strip_size_kb": 64,
01:25:15.526 "state": "configuring",
01:25:15.526 "raid_level": "concat",
01:25:15.526 "superblock": false,
01:25:15.526 "num_base_bdevs": 4,
01:25:15.526 "num_base_bdevs_discovered": 1,
01:25:15.526 "num_base_bdevs_operational": 4,
01:25:15.526 "base_bdevs_list": [
01:25:15.526 {
01:25:15.526 "name": "BaseBdev1",
01:25:15.526 "uuid": "140e6ae1-c7b4-4cea-b277-c5738cff0636",
01:25:15.526 "is_configured": true,
01:25:15.526 "data_offset": 0,
01:25:15.526 "data_size": 65536
01:25:15.526 },
01:25:15.526 {
01:25:15.526 "name": "BaseBdev2",
01:25:15.526 "uuid": "00000000-0000-0000-0000-000000000000",
01:25:15.526 "is_configured": false,
01:25:15.526 "data_offset": 0,
01:25:15.526 "data_size": 0
01:25:15.526 },
01:25:15.526 {
01:25:15.526 "name": "BaseBdev3",
01:25:15.526 "uuid": "00000000-0000-0000-0000-000000000000",
01:25:15.526 "is_configured": false,
01:25:15.526 "data_offset": 0,
01:25:15.526 "data_size": 0
01:25:15.526 },
01:25:15.526 {
01:25:15.526 "name": "BaseBdev4",
01:25:15.526 "uuid": "00000000-0000-0000-0000-000000000000",
01:25:15.526 "is_configured": false,
01:25:15.526 "data_offset": 0,
01:25:15.526 "data_size": 0
01:25:15.526 }
01:25:15.526 ]
01:25:15.526 }'
01:25:15.526 05:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
01:25:15.526 05:20:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
01:25:16.091 05:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid
01:25:16.091 05:20:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
01:25:16.091 05:20:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
01:25:16.091 [2024-12-09 05:20:07.484535] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
01:25:16.091 [2024-12-09 05:20:07.484599] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring
01:25:16.091 05:20:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:25:16.091 05:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
01:25:16.091 05:20:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
01:25:16.091 05:20:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
01:25:16.091 [2024-12-09 05:20:07.492602] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
01:25:16.091 [2024-12-09 05:20:07.495155] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
01:25:16.091 [2024-12-09 05:20:07.495341] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
01:25:16.091 [2024-12-09 05:20:07.495474] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
01:25:16.091 [2024-12-09 05:20:07.495537] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
01:25:16.091 [2024-12-09 05:20:07.495758] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
01:25:16.091 [2024-12-09 05:20:07.495818] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
01:25:16.091 05:20:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:25:16.091 05:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 ))
01:25:16.091 05:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
01:25:16.091 05:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
01:25:16.091 05:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
01:25:16.091 05:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
01:25:16.091 05:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
01:25:16.091 05:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
01:25:16.091 05:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
01:25:16.091 05:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
01:25:16.091 05:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
01:25:16.091 05:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
01:25:16.091 05:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
01:25:16.091 05:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
01:25:16.091 05:20:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
01:25:16.091 05:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
01:25:16.091 05:20:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
01:25:16.091 05:20:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:25:16.091 05:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
01:25:16.091 "name": "Existed_Raid",
01:25:16.091 "uuid": "00000000-0000-0000-0000-000000000000",
01:25:16.091 "strip_size_kb": 64,
01:25:16.091 "state": "configuring",
01:25:16.091 "raid_level": "concat",
01:25:16.091 "superblock": false,
01:25:16.091 "num_base_bdevs": 4,
01:25:16.091 "num_base_bdevs_discovered": 1,
01:25:16.091 "num_base_bdevs_operational": 4,
01:25:16.091 "base_bdevs_list": [
01:25:16.091 {
01:25:16.091 "name": "BaseBdev1",
01:25:16.091 "uuid": "140e6ae1-c7b4-4cea-b277-c5738cff0636",
01:25:16.091 "is_configured": true,
01:25:16.091 "data_offset": 0,
01:25:16.091 "data_size": 65536
01:25:16.091 },
01:25:16.091 {
01:25:16.091 "name": "BaseBdev2",
01:25:16.091 "uuid": "00000000-0000-0000-0000-000000000000",
01:25:16.091 "is_configured": false,
01:25:16.091 "data_offset": 0,
01:25:16.091 "data_size": 0
01:25:16.091 },
01:25:16.091 {
01:25:16.091 "name": "BaseBdev3",
01:25:16.091 "uuid": "00000000-0000-0000-0000-000000000000",
01:25:16.091 "is_configured": false,
01:25:16.091 "data_offset": 0,
01:25:16.091 "data_size": 0
01:25:16.091 },
01:25:16.091 {
01:25:16.091 "name": "BaseBdev4",
01:25:16.091 "uuid": "00000000-0000-0000-0000-000000000000",
01:25:16.091 "is_configured": false,
01:25:16.091 "data_offset": 0,
01:25:16.091 "data_size": 0
01:25:16.091 }
01:25:16.091 ]
01:25:16.091 }'
01:25:16.091 05:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
01:25:16.091 05:20:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
01:25:16.656 05:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
01:25:16.656 05:20:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
01:25:16.656 05:20:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
01:25:16.656 [2024-12-09 05:20:08.043932] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
01:25:16.656 BaseBdev2
01:25:16.656 05:20:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:25:16.656 05:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2
01:25:16.656 05:20:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2
01:25:16.656 05:20:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
01:25:16.656 05:20:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
01:25:16.656 05:20:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
01:25:16.656 05:20:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
01:25:16.656 05:20:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
01:25:16.656 05:20:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
01:25:16.656 05:20:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
01:25:16.656 05:20:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:25:16.656 05:20:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
01:25:16.656 05:20:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
01:25:16.656 05:20:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
01:25:16.656 [
01:25:16.656 {
01:25:16.656 "name": "BaseBdev2",
01:25:16.656 "aliases": [
01:25:16.656 "ba8c32ba-e680-454a-83cc-6143859ff2e0"
01:25:16.656 ],
01:25:16.656 "product_name": "Malloc disk",
01:25:16.656 "block_size": 512,
01:25:16.656 "num_blocks": 65536,
01:25:16.656 "uuid": "ba8c32ba-e680-454a-83cc-6143859ff2e0",
01:25:16.656 "assigned_rate_limits": {
01:25:16.656 "rw_ios_per_sec": 0,
01:25:16.656 "rw_mbytes_per_sec": 0,
01:25:16.656 "r_mbytes_per_sec": 0,
01:25:16.656 "w_mbytes_per_sec": 0
01:25:16.656 },
01:25:16.656 "claimed": true,
01:25:16.656 "claim_type": "exclusive_write",
01:25:16.656 "zoned": false,
01:25:16.656 "supported_io_types": {
01:25:16.656 "read": true,
01:25:16.656 "write": true,
01:25:16.656 "unmap": true,
01:25:16.656 "flush": true,
01:25:16.656 "reset": true,
01:25:16.656 "nvme_admin": false,
01:25:16.656 "nvme_io": false,
01:25:16.656 "nvme_io_md": false,
01:25:16.656 "write_zeroes": true,
01:25:16.656 "zcopy": true,
01:25:16.656 "get_zone_info": false,
01:25:16.656 "zone_management": false,
01:25:16.656 "zone_append": false,
01:25:16.656 "compare": false,
01:25:16.656 "compare_and_write": false,
01:25:16.656 "abort": true,
01:25:16.656 "seek_hole": false,
01:25:16.656 "seek_data": false,
01:25:16.656 "copy": true,
01:25:16.656 "nvme_iov_md": false
01:25:16.656 },
01:25:16.656 "memory_domains": [
01:25:16.656 {
01:25:16.656 "dma_device_id": "system",
01:25:16.656 "dma_device_type": 1
01:25:16.656 },
01:25:16.656 {
01:25:16.656 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
01:25:16.656 "dma_device_type": 2
01:25:16.656 }
01:25:16.656 ],
01:25:16.656 "driver_specific": {}
01:25:16.656 }
01:25:16.656 ]
01:25:16.656 05:20:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:25:16.656 05:20:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
01:25:16.656 05:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ ))
01:25:16.656 05:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
01:25:16.656 05:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
01:25:16.656 05:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
01:25:16.656 05:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
01:25:16.656 05:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
01:25:16.656 05:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
01:25:16.656 05:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
01:25:16.656 05:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
01:25:16.656 05:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
01:25:16.656 05:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
01:25:16.656 05:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
01:25:16.656 05:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
01:25:16.656 05:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
01:25:16.656 05:20:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
01:25:16.656 05:20:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
01:25:16.656 05:20:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:25:16.656 05:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
01:25:16.656 "name": "Existed_Raid",
01:25:16.656 "uuid": "00000000-0000-0000-0000-000000000000",
01:25:16.656 "strip_size_kb": 64,
01:25:16.656 "state": "configuring",
01:25:16.656 "raid_level": "concat",
01:25:16.656 "superblock": false,
01:25:16.656 "num_base_bdevs": 4,
01:25:16.656 "num_base_bdevs_discovered": 2,
01:25:16.656 "num_base_bdevs_operational": 4,
01:25:16.656 "base_bdevs_list": [
01:25:16.656 {
01:25:16.656 "name": "BaseBdev1",
01:25:16.656 "uuid": "140e6ae1-c7b4-4cea-b277-c5738cff0636",
01:25:16.656 "is_configured": true,
01:25:16.656 "data_offset": 0,
01:25:16.656 "data_size": 65536
01:25:16.656 },
01:25:16.656 {
01:25:16.656 "name": "BaseBdev2",
01:25:16.656 "uuid": "ba8c32ba-e680-454a-83cc-6143859ff2e0",
01:25:16.656 "is_configured": true,
01:25:16.656 "data_offset": 0,
01:25:16.656 "data_size": 65536
01:25:16.656 },
01:25:16.656 {
01:25:16.656 "name": "BaseBdev3",
01:25:16.656 "uuid": "00000000-0000-0000-0000-000000000000",
01:25:16.656 "is_configured": false,
01:25:16.656 "data_offset": 0,
01:25:16.656 "data_size": 0
01:25:16.656 },
01:25:16.656 {
01:25:16.656 "name": "BaseBdev4",
01:25:16.656 "uuid": "00000000-0000-0000-0000-000000000000",
01:25:16.656 "is_configured": false,
01:25:16.656 "data_offset": 0,
01:25:16.656 "data_size": 0
01:25:16.656 }
01:25:16.656 ]
01:25:16.656 }'
01:25:16.656 05:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
01:25:16.656 05:20:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
01:25:17.223 05:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
01:25:17.223 05:20:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
01:25:17.223 05:20:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
01:25:17.223 BaseBdev3 [2024-12-09 05:20:08.643728] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
01:25:17.223 05:20:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:25:17.223 05:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3
01:25:17.223 05:20:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3
01:25:17.223 05:20:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
01:25:17.223 05:20:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
01:25:17.223 05:20:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
01:25:17.223 05:20:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
01:25:17.223 05:20:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
01:25:17.223 05:20:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
01:25:17.223 05:20:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
01:25:17.223 05:20:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:25:17.223 05:20:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
01:25:17.223 05:20:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
01:25:17.223 05:20:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
01:25:17.223 [
01:25:17.223 {
01:25:17.223 "name": "BaseBdev3",
01:25:17.223 "aliases": [
01:25:17.223 "58e72e10-a121-418e-805b-c49b6a3fdeeb"
01:25:17.223 ],
01:25:17.223 "product_name": "Malloc disk",
01:25:17.223 "block_size": 512,
01:25:17.223 "num_blocks": 65536,
01:25:17.223 "uuid": "58e72e10-a121-418e-805b-c49b6a3fdeeb",
01:25:17.223 "assigned_rate_limits": {
01:25:17.223 "rw_ios_per_sec": 0,
01:25:17.223 "rw_mbytes_per_sec": 0,
01:25:17.223 "r_mbytes_per_sec": 0,
01:25:17.223 "w_mbytes_per_sec": 0
01:25:17.223 },
01:25:17.223 "claimed": true,
01:25:17.223 "claim_type": "exclusive_write",
01:25:17.223 "zoned": false,
01:25:17.223 "supported_io_types": {
01:25:17.223 "read": true,
01:25:17.223 "write": true,
01:25:17.223 "unmap": true,
01:25:17.223 "flush": true,
01:25:17.223 "reset": true,
01:25:17.223 "nvme_admin": false,
01:25:17.223 "nvme_io": false,
01:25:17.223 "nvme_io_md": false,
01:25:17.223 "write_zeroes": true,
01:25:17.223 "zcopy": true,
01:25:17.223 "get_zone_info": false,
01:25:17.223 "zone_management": false,
01:25:17.223 "zone_append": false,
01:25:17.223 "compare": false,
01:25:17.223 "compare_and_write": false,
01:25:17.223 "abort": true,
01:25:17.223 "seek_hole": false,
01:25:17.223 "seek_data": false,
01:25:17.223 "copy": true,
01:25:17.223 "nvme_iov_md": false
01:25:17.223 },
01:25:17.223 "memory_domains": [
01:25:17.223 {
01:25:17.223 "dma_device_id": "system",
01:25:17.223 "dma_device_type": 1
01:25:17.223 },
01:25:17.223 {
01:25:17.223 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
01:25:17.223 "dma_device_type": 2
01:25:17.223 }
01:25:17.223 ],
01:25:17.223 "driver_specific": {}
01:25:17.223 }
01:25:17.223 ]
01:25:17.223 05:20:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:25:17.223 05:20:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
01:25:17.223 05:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ ))
01:25:17.223 05:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
01:25:17.223 05:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
01:25:17.223 05:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
01:25:17.223 05:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
01:25:17.223 05:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
01:25:17.223 05:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
01:25:17.223 05:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
01:25:17.223 05:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
01:25:17.223 05:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
01:25:17.223 05:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
01:25:17.223 05:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
01:25:17.223 05:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
01:25:17.223 05:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
01:25:17.223 05:20:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
01:25:17.223 05:20:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
01:25:17.223 05:20:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:25:17.223 05:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
01:25:17.223 "name": "Existed_Raid",
01:25:17.223 "uuid": "00000000-0000-0000-0000-000000000000",
01:25:17.223 "strip_size_kb": 64,
01:25:17.223 "state": "configuring",
01:25:17.223 "raid_level": "concat",
01:25:17.223 "superblock": false,
01:25:17.223 "num_base_bdevs": 4,
01:25:17.223 "num_base_bdevs_discovered": 3,
01:25:17.223 "num_base_bdevs_operational": 4,
01:25:17.223 "base_bdevs_list": [
01:25:17.223 {
01:25:17.223 "name": "BaseBdev1",
01:25:17.223 "uuid": "140e6ae1-c7b4-4cea-b277-c5738cff0636",
01:25:17.223 "is_configured": true,
01:25:17.223 "data_offset": 0,
01:25:17.223 "data_size": 65536
01:25:17.223 },
01:25:17.223 {
01:25:17.223 "name": "BaseBdev2",
01:25:17.223 "uuid": "ba8c32ba-e680-454a-83cc-6143859ff2e0",
01:25:17.223 "is_configured": true,
01:25:17.223 "data_offset": 0,
01:25:17.223 "data_size": 65536
01:25:17.223 },
01:25:17.223 {
01:25:17.223 "name": "BaseBdev3",
01:25:17.223 "uuid": "58e72e10-a121-418e-805b-c49b6a3fdeeb",
01:25:17.223 "is_configured": true,
01:25:17.223 "data_offset": 0,
01:25:17.223 "data_size": 65536
01:25:17.223 },
01:25:17.223 {
01:25:17.223 "name": "BaseBdev4",
01:25:17.223 "uuid": "00000000-0000-0000-0000-000000000000",
01:25:17.223 "is_configured": false,
01:25:17.223 "data_offset": 0,
01:25:17.223 "data_size": 0
01:25:17.223 }
01:25:17.223 ]
01:25:17.223 }'
01:25:17.223 05:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
01:25:17.223 05:20:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
01:25:17.791 05:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4
01:25:17.791 05:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
01:25:17.791 05:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
01:25:17.791 [2024-12-09 05:20:09.234734] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
01:25:17.791 [2024-12-09 05:20:09.234805] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
01:25:17.791 [2024-12-09 05:20:09.234818] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512
01:25:17.791 [2024-12-09 05:20:09.235162] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0
01:25:17.791 [2024-12-09 05:20:09.235397] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
01:25:17.791 [2024-12-09 05:20:09.235419] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80
01:25:17.791 BaseBdev4
01:25:17.791 [2024-12-09 05:20:09.235750] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
01:25:17.791 05:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:25:17.791 05:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4
01:25:17.791 05:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4
01:25:17.791 05:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
01:25:17.791 05:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
01:25:17.791 05:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
01:25:17.791 05:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
01:25:17.791 05:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
01:25:17.791 05:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
01:25:17.791 05:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
01:25:17.791 05:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:25:17.791 05:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000
01:25:17.791 05:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
01:25:17.791 05:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
01:25:17.791 [
01:25:17.791 {
01:25:17.791 "name": "BaseBdev4",
01:25:17.791 "aliases": [
01:25:17.791 "3217d659-4b61-4215-943b-249a2eb8616a"
01:25:17.791 ],
01:25:17.791 "product_name": "Malloc disk",
01:25:17.791 "block_size": 512,
01:25:17.791 "num_blocks": 65536,
01:25:17.791 "uuid": "3217d659-4b61-4215-943b-249a2eb8616a",
01:25:17.791 "assigned_rate_limits": {
01:25:17.791 "rw_ios_per_sec": 0,
01:25:17.791 "rw_mbytes_per_sec": 0,
01:25:17.791 "r_mbytes_per_sec": 0,
01:25:17.791 "w_mbytes_per_sec": 0
01:25:17.791 },
01:25:17.791 "claimed": true,
01:25:17.791 "claim_type": "exclusive_write",
01:25:17.791 "zoned": false,
01:25:17.791 "supported_io_types": {
01:25:17.791 "read": true,
01:25:17.791 "write": true,
01:25:17.791 "unmap": true,
01:25:17.791 "flush": true,
01:25:17.791 "reset": true,
01:25:17.791 "nvme_admin": false,
01:25:17.791 "nvme_io": false,
01:25:17.791 "nvme_io_md": false,
01:25:17.791 "write_zeroes": true,
01:25:17.791 "zcopy": true,
01:25:17.791 "get_zone_info": false,
01:25:17.791 "zone_management": false,
01:25:17.791 "zone_append": false,
01:25:17.791 "compare": false,
01:25:17.791 "compare_and_write": false,
01:25:17.791 "abort": true,
01:25:17.791 "seek_hole": false,
01:25:17.791 "seek_data": false,
01:25:17.791 "copy": true,
01:25:17.791 "nvme_iov_md": false
01:25:17.791 },
01:25:17.791 "memory_domains": [
01:25:17.791 {
01:25:17.791 "dma_device_id": "system",
01:25:17.791 "dma_device_type": 1
01:25:17.791 },
01:25:17.791 {
01:25:17.791 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
01:25:17.791 "dma_device_type": 2
01:25:17.791 }
01:25:17.791 ],
01:25:17.791 "driver_specific": {}
01:25:17.791 }
01:25:17.791 ]
01:25:17.791 05:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:25:17.791 05:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
01:25:17.791 05:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ ))
01:25:17.791 05:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
01:25:17.791 05:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4
01:25:17.791 05:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
01:25:17.791 05:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
01:25:17.791 05:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
01:25:17.791 05:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
01:25:17.791 05:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
01:25:17.791 
05:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:25:17.791 05:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:25:17.791 05:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:25:17.791 05:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:25:17.791 05:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:25:17.791 05:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:17.791 05:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:25:17.791 05:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:25:17.791 05:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:17.791 05:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:25:17.791 "name": "Existed_Raid", 01:25:17.791 "uuid": "4f7a9a09-46a0-4aaa-ab68-3de1182c4413", 01:25:17.791 "strip_size_kb": 64, 01:25:17.791 "state": "online", 01:25:17.791 "raid_level": "concat", 01:25:17.791 "superblock": false, 01:25:17.791 "num_base_bdevs": 4, 01:25:17.791 "num_base_bdevs_discovered": 4, 01:25:17.791 "num_base_bdevs_operational": 4, 01:25:17.791 "base_bdevs_list": [ 01:25:17.791 { 01:25:17.791 "name": "BaseBdev1", 01:25:17.791 "uuid": "140e6ae1-c7b4-4cea-b277-c5738cff0636", 01:25:17.791 "is_configured": true, 01:25:17.791 "data_offset": 0, 01:25:17.791 "data_size": 65536 01:25:17.791 }, 01:25:17.791 { 01:25:17.791 "name": "BaseBdev2", 01:25:17.791 "uuid": "ba8c32ba-e680-454a-83cc-6143859ff2e0", 01:25:17.791 "is_configured": true, 01:25:17.791 "data_offset": 0, 01:25:17.791 "data_size": 65536 01:25:17.791 }, 01:25:17.791 { 01:25:17.791 "name": "BaseBdev3", 
01:25:17.791 "uuid": "58e72e10-a121-418e-805b-c49b6a3fdeeb", 01:25:17.791 "is_configured": true, 01:25:17.791 "data_offset": 0, 01:25:17.791 "data_size": 65536 01:25:17.791 }, 01:25:17.791 { 01:25:17.791 "name": "BaseBdev4", 01:25:17.791 "uuid": "3217d659-4b61-4215-943b-249a2eb8616a", 01:25:17.791 "is_configured": true, 01:25:17.791 "data_offset": 0, 01:25:17.791 "data_size": 65536 01:25:17.791 } 01:25:17.791 ] 01:25:17.791 }' 01:25:17.791 05:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:25:17.791 05:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:25:18.359 05:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 01:25:18.359 05:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 01:25:18.359 05:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 01:25:18.359 05:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 01:25:18.359 05:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 01:25:18.359 05:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 01:25:18.359 05:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 01:25:18.359 05:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 01:25:18.359 05:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:18.359 05:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:25:18.359 [2024-12-09 05:20:09.819502] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 01:25:18.359 05:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:18.359 
05:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 01:25:18.359 "name": "Existed_Raid", 01:25:18.359 "aliases": [ 01:25:18.359 "4f7a9a09-46a0-4aaa-ab68-3de1182c4413" 01:25:18.359 ], 01:25:18.359 "product_name": "Raid Volume", 01:25:18.359 "block_size": 512, 01:25:18.359 "num_blocks": 262144, 01:25:18.359 "uuid": "4f7a9a09-46a0-4aaa-ab68-3de1182c4413", 01:25:18.359 "assigned_rate_limits": { 01:25:18.359 "rw_ios_per_sec": 0, 01:25:18.359 "rw_mbytes_per_sec": 0, 01:25:18.359 "r_mbytes_per_sec": 0, 01:25:18.359 "w_mbytes_per_sec": 0 01:25:18.359 }, 01:25:18.359 "claimed": false, 01:25:18.359 "zoned": false, 01:25:18.359 "supported_io_types": { 01:25:18.359 "read": true, 01:25:18.359 "write": true, 01:25:18.359 "unmap": true, 01:25:18.359 "flush": true, 01:25:18.359 "reset": true, 01:25:18.359 "nvme_admin": false, 01:25:18.359 "nvme_io": false, 01:25:18.359 "nvme_io_md": false, 01:25:18.359 "write_zeroes": true, 01:25:18.359 "zcopy": false, 01:25:18.359 "get_zone_info": false, 01:25:18.359 "zone_management": false, 01:25:18.359 "zone_append": false, 01:25:18.359 "compare": false, 01:25:18.359 "compare_and_write": false, 01:25:18.359 "abort": false, 01:25:18.359 "seek_hole": false, 01:25:18.359 "seek_data": false, 01:25:18.359 "copy": false, 01:25:18.359 "nvme_iov_md": false 01:25:18.359 }, 01:25:18.359 "memory_domains": [ 01:25:18.359 { 01:25:18.359 "dma_device_id": "system", 01:25:18.359 "dma_device_type": 1 01:25:18.359 }, 01:25:18.359 { 01:25:18.359 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:25:18.359 "dma_device_type": 2 01:25:18.359 }, 01:25:18.359 { 01:25:18.359 "dma_device_id": "system", 01:25:18.359 "dma_device_type": 1 01:25:18.359 }, 01:25:18.359 { 01:25:18.359 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:25:18.359 "dma_device_type": 2 01:25:18.359 }, 01:25:18.359 { 01:25:18.359 "dma_device_id": "system", 01:25:18.359 "dma_device_type": 1 01:25:18.359 }, 01:25:18.359 { 01:25:18.359 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 01:25:18.359 "dma_device_type": 2 01:25:18.359 }, 01:25:18.359 { 01:25:18.359 "dma_device_id": "system", 01:25:18.359 "dma_device_type": 1 01:25:18.359 }, 01:25:18.359 { 01:25:18.359 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:25:18.359 "dma_device_type": 2 01:25:18.359 } 01:25:18.359 ], 01:25:18.359 "driver_specific": { 01:25:18.359 "raid": { 01:25:18.359 "uuid": "4f7a9a09-46a0-4aaa-ab68-3de1182c4413", 01:25:18.359 "strip_size_kb": 64, 01:25:18.359 "state": "online", 01:25:18.359 "raid_level": "concat", 01:25:18.359 "superblock": false, 01:25:18.359 "num_base_bdevs": 4, 01:25:18.359 "num_base_bdevs_discovered": 4, 01:25:18.359 "num_base_bdevs_operational": 4, 01:25:18.359 "base_bdevs_list": [ 01:25:18.359 { 01:25:18.359 "name": "BaseBdev1", 01:25:18.359 "uuid": "140e6ae1-c7b4-4cea-b277-c5738cff0636", 01:25:18.359 "is_configured": true, 01:25:18.359 "data_offset": 0, 01:25:18.359 "data_size": 65536 01:25:18.359 }, 01:25:18.359 { 01:25:18.359 "name": "BaseBdev2", 01:25:18.359 "uuid": "ba8c32ba-e680-454a-83cc-6143859ff2e0", 01:25:18.359 "is_configured": true, 01:25:18.359 "data_offset": 0, 01:25:18.359 "data_size": 65536 01:25:18.359 }, 01:25:18.359 { 01:25:18.359 "name": "BaseBdev3", 01:25:18.359 "uuid": "58e72e10-a121-418e-805b-c49b6a3fdeeb", 01:25:18.359 "is_configured": true, 01:25:18.359 "data_offset": 0, 01:25:18.359 "data_size": 65536 01:25:18.359 }, 01:25:18.359 { 01:25:18.359 "name": "BaseBdev4", 01:25:18.359 "uuid": "3217d659-4b61-4215-943b-249a2eb8616a", 01:25:18.359 "is_configured": true, 01:25:18.359 "data_offset": 0, 01:25:18.359 "data_size": 65536 01:25:18.359 } 01:25:18.359 ] 01:25:18.359 } 01:25:18.359 } 01:25:18.359 }' 01:25:18.359 05:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 01:25:18.359 05:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 01:25:18.359 BaseBdev2 
01:25:18.359 BaseBdev3 01:25:18.359 BaseBdev4' 01:25:18.359 05:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:25:18.359 05:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 01:25:18.359 05:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:25:18.359 05:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 01:25:18.359 05:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:25:18.359 05:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:18.359 05:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:25:18.618 05:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:18.618 05:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:25:18.618 05:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:25:18.618 05:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:25:18.618 05:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 01:25:18.618 05:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:25:18.618 05:20:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:18.618 05:20:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:25:18.618 05:20:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:18.618 05:20:10 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:25:18.618 05:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:25:18.618 05:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:25:18.618 05:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 01:25:18.618 05:20:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:18.618 05:20:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:25:18.618 05:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:25:18.618 05:20:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:18.618 05:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:25:18.618 05:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:25:18.618 05:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:25:18.618 05:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 01:25:18.618 05:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:25:18.618 05:20:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:18.618 05:20:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:25:18.618 05:20:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:18.618 05:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:25:18.618 05:20:10 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:25:18.618 05:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 01:25:18.618 05:20:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:18.618 05:20:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:25:18.618 [2024-12-09 05:20:10.183171] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 01:25:18.618 [2024-12-09 05:20:10.183371] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 01:25:18.618 [2024-12-09 05:20:10.183560] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 01:25:18.877 05:20:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:18.877 05:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 01:25:18.877 05:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 01:25:18.877 05:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 01:25:18.877 05:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 01:25:18.877 05:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 01:25:18.877 05:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 01:25:18.877 05:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:25:18.877 05:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 01:25:18.877 05:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 01:25:18.877 05:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 01:25:18.877 05:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 01:25:18.877 05:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:25:18.877 05:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:25:18.877 05:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:25:18.877 05:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:25:18.877 05:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:25:18.877 05:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:25:18.877 05:20:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:18.877 05:20:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:25:18.877 05:20:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:18.877 05:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:25:18.877 "name": "Existed_Raid", 01:25:18.877 "uuid": "4f7a9a09-46a0-4aaa-ab68-3de1182c4413", 01:25:18.877 "strip_size_kb": 64, 01:25:18.877 "state": "offline", 01:25:18.877 "raid_level": "concat", 01:25:18.877 "superblock": false, 01:25:18.877 "num_base_bdevs": 4, 01:25:18.877 "num_base_bdevs_discovered": 3, 01:25:18.877 "num_base_bdevs_operational": 3, 01:25:18.877 "base_bdevs_list": [ 01:25:18.877 { 01:25:18.877 "name": null, 01:25:18.877 "uuid": "00000000-0000-0000-0000-000000000000", 01:25:18.877 "is_configured": false, 01:25:18.877 "data_offset": 0, 01:25:18.877 "data_size": 65536 01:25:18.877 }, 01:25:18.877 { 01:25:18.877 "name": "BaseBdev2", 01:25:18.877 "uuid": "ba8c32ba-e680-454a-83cc-6143859ff2e0", 01:25:18.877 "is_configured": 
true, 01:25:18.877 "data_offset": 0, 01:25:18.877 "data_size": 65536 01:25:18.877 }, 01:25:18.877 { 01:25:18.877 "name": "BaseBdev3", 01:25:18.877 "uuid": "58e72e10-a121-418e-805b-c49b6a3fdeeb", 01:25:18.877 "is_configured": true, 01:25:18.877 "data_offset": 0, 01:25:18.877 "data_size": 65536 01:25:18.877 }, 01:25:18.877 { 01:25:18.877 "name": "BaseBdev4", 01:25:18.877 "uuid": "3217d659-4b61-4215-943b-249a2eb8616a", 01:25:18.877 "is_configured": true, 01:25:18.877 "data_offset": 0, 01:25:18.877 "data_size": 65536 01:25:18.877 } 01:25:18.877 ] 01:25:18.877 }' 01:25:18.877 05:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:25:18.877 05:20:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:25:19.444 05:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 01:25:19.444 05:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 01:25:19.444 05:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 01:25:19.444 05:20:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:19.444 05:20:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:25:19.444 05:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 01:25:19.444 05:20:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:19.444 05:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 01:25:19.444 05:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 01:25:19.444 05:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 01:25:19.444 05:20:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
01:25:19.444 05:20:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:25:19.444 [2024-12-09 05:20:10.832264] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 01:25:19.444 05:20:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:19.444 05:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 01:25:19.444 05:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 01:25:19.444 05:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 01:25:19.444 05:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 01:25:19.444 05:20:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:19.444 05:20:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:25:19.444 05:20:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:19.444 05:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 01:25:19.444 05:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 01:25:19.444 05:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 01:25:19.444 05:20:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:19.444 05:20:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:25:19.444 [2024-12-09 05:20:10.974342] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 01:25:19.713 05:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:19.713 05:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 01:25:19.713 05:20:11 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 01:25:19.713 05:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 01:25:19.713 05:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:19.713 05:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:25:19.713 05:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 01:25:19.713 05:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:19.713 05:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 01:25:19.713 05:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 01:25:19.713 05:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 01:25:19.713 05:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:19.713 05:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:25:19.713 [2024-12-09 05:20:11.118630] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 01:25:19.713 [2024-12-09 05:20:11.118858] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 01:25:19.713 05:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:19.713 05:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 01:25:19.713 05:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 01:25:19.713 05:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 01:25:19.713 05:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 01:25:19.713 05:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:25:19.713 05:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 01:25:19.713 05:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:19.713 05:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 01:25:19.713 05:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 01:25:19.713 05:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 01:25:19.713 05:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 01:25:19.713 05:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 01:25:19.713 05:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 01:25:19.713 05:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:19.713 05:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:25:19.713 BaseBdev2 01:25:19.713 05:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:19.713 05:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 01:25:19.713 05:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 01:25:19.714 05:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 01:25:19.714 05:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 01:25:19.714 05:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 01:25:19.714 05:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 01:25:19.714 05:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 01:25:19.714 05:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:19.714 05:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:25:19.714 05:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:19.714 05:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 01:25:19.714 05:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:19.714 05:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:25:19.995 [ 01:25:19.995 { 01:25:19.995 "name": "BaseBdev2", 01:25:19.995 "aliases": [ 01:25:19.995 "94cf3c8b-99f5-47f2-986a-ca951b3c2e05" 01:25:19.995 ], 01:25:19.995 "product_name": "Malloc disk", 01:25:19.995 "block_size": 512, 01:25:19.995 "num_blocks": 65536, 01:25:19.995 "uuid": "94cf3c8b-99f5-47f2-986a-ca951b3c2e05", 01:25:19.995 "assigned_rate_limits": { 01:25:19.995 "rw_ios_per_sec": 0, 01:25:19.995 "rw_mbytes_per_sec": 0, 01:25:19.995 "r_mbytes_per_sec": 0, 01:25:19.995 "w_mbytes_per_sec": 0 01:25:19.995 }, 01:25:19.995 "claimed": false, 01:25:19.995 "zoned": false, 01:25:19.995 "supported_io_types": { 01:25:19.995 "read": true, 01:25:19.995 "write": true, 01:25:19.995 "unmap": true, 01:25:19.995 "flush": true, 01:25:19.995 "reset": true, 01:25:19.995 "nvme_admin": false, 01:25:19.995 "nvme_io": false, 01:25:19.995 "nvme_io_md": false, 01:25:19.995 "write_zeroes": true, 01:25:19.995 "zcopy": true, 01:25:19.995 "get_zone_info": false, 01:25:19.995 "zone_management": false, 01:25:19.995 "zone_append": false, 01:25:19.995 "compare": false, 01:25:19.995 "compare_and_write": false, 01:25:19.995 "abort": true, 01:25:19.995 "seek_hole": false, 01:25:19.995 
"seek_data": false, 01:25:19.995 "copy": true, 01:25:19.995 "nvme_iov_md": false 01:25:19.995 }, 01:25:19.995 "memory_domains": [ 01:25:19.995 { 01:25:19.995 "dma_device_id": "system", 01:25:19.995 "dma_device_type": 1 01:25:19.995 }, 01:25:19.995 { 01:25:19.995 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:25:19.995 "dma_device_type": 2 01:25:19.995 } 01:25:19.995 ], 01:25:19.995 "driver_specific": {} 01:25:19.995 } 01:25:19.995 ] 01:25:19.995 05:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:19.995 05:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 01:25:19.995 05:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 01:25:19.995 05:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 01:25:19.995 05:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 01:25:19.995 05:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:19.995 05:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:25:19.995 BaseBdev3 01:25:19.995 05:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:19.995 05:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 01:25:19.995 05:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 01:25:19.995 05:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 01:25:19.995 05:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 01:25:19.995 05:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 01:25:19.995 05:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
01:25:19.995 05:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 01:25:19.995 05:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:19.995 05:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:25:19.995 05:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:19.996 05:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 01:25:19.996 05:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:19.996 05:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:25:19.996 [ 01:25:19.996 { 01:25:19.996 "name": "BaseBdev3", 01:25:19.996 "aliases": [ 01:25:19.996 "3f42b55a-e43b-40ec-872e-19214214680a" 01:25:19.996 ], 01:25:19.996 "product_name": "Malloc disk", 01:25:19.996 "block_size": 512, 01:25:19.996 "num_blocks": 65536, 01:25:19.996 "uuid": "3f42b55a-e43b-40ec-872e-19214214680a", 01:25:19.996 "assigned_rate_limits": { 01:25:19.996 "rw_ios_per_sec": 0, 01:25:19.996 "rw_mbytes_per_sec": 0, 01:25:19.996 "r_mbytes_per_sec": 0, 01:25:19.996 "w_mbytes_per_sec": 0 01:25:19.996 }, 01:25:19.996 "claimed": false, 01:25:19.996 "zoned": false, 01:25:19.996 "supported_io_types": { 01:25:19.996 "read": true, 01:25:19.996 "write": true, 01:25:19.996 "unmap": true, 01:25:19.996 "flush": true, 01:25:19.996 "reset": true, 01:25:19.996 "nvme_admin": false, 01:25:19.996 "nvme_io": false, 01:25:19.996 "nvme_io_md": false, 01:25:19.996 "write_zeroes": true, 01:25:19.996 "zcopy": true, 01:25:19.996 "get_zone_info": false, 01:25:19.996 "zone_management": false, 01:25:19.996 "zone_append": false, 01:25:19.996 "compare": false, 01:25:19.996 "compare_and_write": false, 01:25:19.996 "abort": true, 01:25:19.996 "seek_hole": false, 01:25:19.996 "seek_data": false, 
01:25:19.996 "copy": true, 01:25:19.996 "nvme_iov_md": false 01:25:19.996 }, 01:25:19.996 "memory_domains": [ 01:25:19.996 { 01:25:19.996 "dma_device_id": "system", 01:25:19.996 "dma_device_type": 1 01:25:19.996 }, 01:25:19.996 { 01:25:19.996 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:25:19.996 "dma_device_type": 2 01:25:19.996 } 01:25:19.996 ], 01:25:19.996 "driver_specific": {} 01:25:19.996 } 01:25:19.996 ] 01:25:19.996 05:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:19.996 05:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 01:25:19.996 05:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 01:25:19.996 05:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 01:25:19.996 05:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 01:25:19.996 05:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:19.996 05:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:25:19.996 BaseBdev4 01:25:19.996 05:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:19.996 05:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 01:25:19.996 05:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 01:25:19.996 05:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 01:25:19.996 05:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 01:25:19.996 05:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 01:25:19.996 05:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 01:25:19.996 
05:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 01:25:19.996 05:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:19.996 05:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:25:19.996 05:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:19.996 05:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 01:25:19.996 05:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:19.996 05:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:25:19.996 [ 01:25:19.996 { 01:25:19.996 "name": "BaseBdev4", 01:25:19.996 "aliases": [ 01:25:19.996 "03ad68db-3c19-4d23-ac45-d90e3846d8e7" 01:25:19.996 ], 01:25:19.996 "product_name": "Malloc disk", 01:25:19.996 "block_size": 512, 01:25:19.996 "num_blocks": 65536, 01:25:19.996 "uuid": "03ad68db-3c19-4d23-ac45-d90e3846d8e7", 01:25:19.996 "assigned_rate_limits": { 01:25:19.996 "rw_ios_per_sec": 0, 01:25:19.996 "rw_mbytes_per_sec": 0, 01:25:19.996 "r_mbytes_per_sec": 0, 01:25:19.996 "w_mbytes_per_sec": 0 01:25:19.996 }, 01:25:19.996 "claimed": false, 01:25:19.996 "zoned": false, 01:25:19.996 "supported_io_types": { 01:25:19.996 "read": true, 01:25:19.996 "write": true, 01:25:19.996 "unmap": true, 01:25:19.996 "flush": true, 01:25:19.996 "reset": true, 01:25:19.996 "nvme_admin": false, 01:25:19.996 "nvme_io": false, 01:25:19.996 "nvme_io_md": false, 01:25:19.996 "write_zeroes": true, 01:25:19.996 "zcopy": true, 01:25:19.996 "get_zone_info": false, 01:25:19.996 "zone_management": false, 01:25:19.996 "zone_append": false, 01:25:19.996 "compare": false, 01:25:19.996 "compare_and_write": false, 01:25:19.996 "abort": true, 01:25:19.996 "seek_hole": false, 01:25:19.996 "seek_data": false, 01:25:19.996 
"copy": true, 01:25:19.996 "nvme_iov_md": false 01:25:19.996 }, 01:25:19.996 "memory_domains": [ 01:25:19.996 { 01:25:19.996 "dma_device_id": "system", 01:25:19.996 "dma_device_type": 1 01:25:19.996 }, 01:25:19.996 { 01:25:19.996 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:25:19.996 "dma_device_type": 2 01:25:19.996 } 01:25:19.996 ], 01:25:19.996 "driver_specific": {} 01:25:19.996 } 01:25:19.996 ] 01:25:19.996 05:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:19.996 05:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 01:25:19.996 05:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 01:25:19.996 05:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 01:25:19.996 05:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 01:25:19.996 05:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:19.996 05:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:25:19.996 [2024-12-09 05:20:11.497636] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 01:25:19.996 [2024-12-09 05:20:11.497891] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 01:25:19.996 [2024-12-09 05:20:11.498042] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 01:25:19.996 [2024-12-09 05:20:11.500574] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 01:25:19.996 [2024-12-09 05:20:11.500768] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 01:25:19.997 05:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:19.997 05:20:11 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 01:25:19.997 05:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:25:19.997 05:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:25:19.997 05:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 01:25:19.997 05:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:25:19.997 05:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 01:25:19.997 05:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:25:19.997 05:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:25:19.997 05:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:25:19.997 05:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:25:19.997 05:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:25:19.997 05:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:25:19.997 05:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:19.997 05:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:25:19.997 05:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:19.997 05:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:25:19.997 "name": "Existed_Raid", 01:25:19.997 "uuid": "00000000-0000-0000-0000-000000000000", 01:25:19.997 "strip_size_kb": 64, 01:25:19.997 "state": "configuring", 01:25:19.997 
"raid_level": "concat", 01:25:19.997 "superblock": false, 01:25:19.997 "num_base_bdevs": 4, 01:25:19.997 "num_base_bdevs_discovered": 3, 01:25:19.997 "num_base_bdevs_operational": 4, 01:25:19.997 "base_bdevs_list": [ 01:25:19.997 { 01:25:19.997 "name": "BaseBdev1", 01:25:19.997 "uuid": "00000000-0000-0000-0000-000000000000", 01:25:19.997 "is_configured": false, 01:25:19.997 "data_offset": 0, 01:25:19.997 "data_size": 0 01:25:19.997 }, 01:25:19.997 { 01:25:19.997 "name": "BaseBdev2", 01:25:19.997 "uuid": "94cf3c8b-99f5-47f2-986a-ca951b3c2e05", 01:25:19.997 "is_configured": true, 01:25:19.997 "data_offset": 0, 01:25:19.997 "data_size": 65536 01:25:19.997 }, 01:25:19.997 { 01:25:19.997 "name": "BaseBdev3", 01:25:19.997 "uuid": "3f42b55a-e43b-40ec-872e-19214214680a", 01:25:19.997 "is_configured": true, 01:25:19.997 "data_offset": 0, 01:25:19.997 "data_size": 65536 01:25:19.997 }, 01:25:19.997 { 01:25:19.997 "name": "BaseBdev4", 01:25:19.997 "uuid": "03ad68db-3c19-4d23-ac45-d90e3846d8e7", 01:25:19.997 "is_configured": true, 01:25:19.997 "data_offset": 0, 01:25:19.997 "data_size": 65536 01:25:19.997 } 01:25:19.997 ] 01:25:19.997 }' 01:25:19.997 05:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:25:19.997 05:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:25:20.577 05:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 01:25:20.577 05:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:20.577 05:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:25:20.577 [2024-12-09 05:20:12.053760] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 01:25:20.577 05:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:20.577 05:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 4 01:25:20.577 05:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:25:20.577 05:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:25:20.577 05:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 01:25:20.577 05:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:25:20.577 05:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 01:25:20.577 05:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:25:20.577 05:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:25:20.577 05:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:25:20.577 05:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:25:20.577 05:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:25:20.577 05:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:25:20.577 05:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:20.577 05:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:25:20.577 05:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:20.577 05:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:25:20.577 "name": "Existed_Raid", 01:25:20.577 "uuid": "00000000-0000-0000-0000-000000000000", 01:25:20.577 "strip_size_kb": 64, 01:25:20.577 "state": "configuring", 01:25:20.577 "raid_level": "concat", 01:25:20.577 "superblock": false, 
01:25:20.577 "num_base_bdevs": 4, 01:25:20.577 "num_base_bdevs_discovered": 2, 01:25:20.577 "num_base_bdevs_operational": 4, 01:25:20.577 "base_bdevs_list": [ 01:25:20.577 { 01:25:20.577 "name": "BaseBdev1", 01:25:20.577 "uuid": "00000000-0000-0000-0000-000000000000", 01:25:20.577 "is_configured": false, 01:25:20.577 "data_offset": 0, 01:25:20.577 "data_size": 0 01:25:20.577 }, 01:25:20.577 { 01:25:20.577 "name": null, 01:25:20.577 "uuid": "94cf3c8b-99f5-47f2-986a-ca951b3c2e05", 01:25:20.577 "is_configured": false, 01:25:20.577 "data_offset": 0, 01:25:20.577 "data_size": 65536 01:25:20.577 }, 01:25:20.577 { 01:25:20.577 "name": "BaseBdev3", 01:25:20.577 "uuid": "3f42b55a-e43b-40ec-872e-19214214680a", 01:25:20.577 "is_configured": true, 01:25:20.577 "data_offset": 0, 01:25:20.577 "data_size": 65536 01:25:20.577 }, 01:25:20.577 { 01:25:20.577 "name": "BaseBdev4", 01:25:20.577 "uuid": "03ad68db-3c19-4d23-ac45-d90e3846d8e7", 01:25:20.577 "is_configured": true, 01:25:20.577 "data_offset": 0, 01:25:20.577 "data_size": 65536 01:25:20.577 } 01:25:20.577 ] 01:25:20.577 }' 01:25:20.577 05:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:25:20.577 05:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:25:21.143 05:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 01:25:21.143 05:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:21.143 05:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 01:25:21.143 05:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:25:21.143 05:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:21.143 05:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 01:25:21.143 05:20:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 01:25:21.143 05:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:21.143 05:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:25:21.143 [2024-12-09 05:20:12.693093] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 01:25:21.143 BaseBdev1 01:25:21.143 05:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:21.143 05:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 01:25:21.143 05:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 01:25:21.143 05:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 01:25:21.143 05:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 01:25:21.143 05:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 01:25:21.143 05:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 01:25:21.143 05:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 01:25:21.143 05:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:21.143 05:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:25:21.143 05:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:21.143 05:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 01:25:21.143 05:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:21.143 05:20:12 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 01:25:21.143 [ 01:25:21.143 { 01:25:21.143 "name": "BaseBdev1", 01:25:21.143 "aliases": [ 01:25:21.143 "701843d6-bdfa-4645-acbe-9a0b0e35721f" 01:25:21.143 ], 01:25:21.143 "product_name": "Malloc disk", 01:25:21.143 "block_size": 512, 01:25:21.143 "num_blocks": 65536, 01:25:21.143 "uuid": "701843d6-bdfa-4645-acbe-9a0b0e35721f", 01:25:21.143 "assigned_rate_limits": { 01:25:21.143 "rw_ios_per_sec": 0, 01:25:21.143 "rw_mbytes_per_sec": 0, 01:25:21.143 "r_mbytes_per_sec": 0, 01:25:21.143 "w_mbytes_per_sec": 0 01:25:21.143 }, 01:25:21.143 "claimed": true, 01:25:21.143 "claim_type": "exclusive_write", 01:25:21.143 "zoned": false, 01:25:21.143 "supported_io_types": { 01:25:21.143 "read": true, 01:25:21.143 "write": true, 01:25:21.143 "unmap": true, 01:25:21.143 "flush": true, 01:25:21.143 "reset": true, 01:25:21.143 "nvme_admin": false, 01:25:21.143 "nvme_io": false, 01:25:21.143 "nvme_io_md": false, 01:25:21.143 "write_zeroes": true, 01:25:21.143 "zcopy": true, 01:25:21.143 "get_zone_info": false, 01:25:21.143 "zone_management": false, 01:25:21.143 "zone_append": false, 01:25:21.143 "compare": false, 01:25:21.143 "compare_and_write": false, 01:25:21.143 "abort": true, 01:25:21.143 "seek_hole": false, 01:25:21.143 "seek_data": false, 01:25:21.143 "copy": true, 01:25:21.143 "nvme_iov_md": false 01:25:21.143 }, 01:25:21.143 "memory_domains": [ 01:25:21.143 { 01:25:21.143 "dma_device_id": "system", 01:25:21.143 "dma_device_type": 1 01:25:21.143 }, 01:25:21.143 { 01:25:21.143 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:25:21.143 "dma_device_type": 2 01:25:21.143 } 01:25:21.143 ], 01:25:21.143 "driver_specific": {} 01:25:21.143 } 01:25:21.143 ] 01:25:21.143 05:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:21.143 05:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 01:25:21.143 05:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring concat 64 4 01:25:21.143 05:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:25:21.143 05:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:25:21.143 05:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 01:25:21.143 05:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:25:21.143 05:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 01:25:21.143 05:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:25:21.143 05:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:25:21.144 05:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:25:21.144 05:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:25:21.144 05:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:25:21.144 05:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:25:21.144 05:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:21.144 05:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:25:21.144 05:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:21.402 05:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:25:21.402 "name": "Existed_Raid", 01:25:21.402 "uuid": "00000000-0000-0000-0000-000000000000", 01:25:21.402 "strip_size_kb": 64, 01:25:21.402 "state": "configuring", 01:25:21.402 "raid_level": "concat", 01:25:21.402 "superblock": false, 
01:25:21.402 "num_base_bdevs": 4, 01:25:21.402 "num_base_bdevs_discovered": 3, 01:25:21.402 "num_base_bdevs_operational": 4, 01:25:21.402 "base_bdevs_list": [ 01:25:21.402 { 01:25:21.402 "name": "BaseBdev1", 01:25:21.402 "uuid": "701843d6-bdfa-4645-acbe-9a0b0e35721f", 01:25:21.402 "is_configured": true, 01:25:21.402 "data_offset": 0, 01:25:21.402 "data_size": 65536 01:25:21.402 }, 01:25:21.402 { 01:25:21.402 "name": null, 01:25:21.402 "uuid": "94cf3c8b-99f5-47f2-986a-ca951b3c2e05", 01:25:21.402 "is_configured": false, 01:25:21.402 "data_offset": 0, 01:25:21.402 "data_size": 65536 01:25:21.402 }, 01:25:21.402 { 01:25:21.402 "name": "BaseBdev3", 01:25:21.402 "uuid": "3f42b55a-e43b-40ec-872e-19214214680a", 01:25:21.402 "is_configured": true, 01:25:21.402 "data_offset": 0, 01:25:21.402 "data_size": 65536 01:25:21.402 }, 01:25:21.402 { 01:25:21.402 "name": "BaseBdev4", 01:25:21.402 "uuid": "03ad68db-3c19-4d23-ac45-d90e3846d8e7", 01:25:21.402 "is_configured": true, 01:25:21.402 "data_offset": 0, 01:25:21.402 "data_size": 65536 01:25:21.402 } 01:25:21.402 ] 01:25:21.402 }' 01:25:21.402 05:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:25:21.402 05:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:25:21.660 05:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 01:25:21.660 05:20:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:21.660 05:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 01:25:21.660 05:20:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:25:21.660 05:20:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:21.919 05:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 01:25:21.919 05:20:13 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 01:25:21.919 05:20:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:21.919 05:20:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:25:21.919 [2024-12-09 05:20:13.313384] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 01:25:21.919 05:20:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:21.919 05:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 01:25:21.919 05:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:25:21.919 05:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:25:21.919 05:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 01:25:21.919 05:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:25:21.919 05:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 01:25:21.919 05:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:25:21.919 05:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:25:21.919 05:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:25:21.919 05:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:25:21.919 05:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:25:21.919 05:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:25:21.919 05:20:13 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:21.919 05:20:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:25:21.919 05:20:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:21.919 05:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:25:21.919 "name": "Existed_Raid", 01:25:21.919 "uuid": "00000000-0000-0000-0000-000000000000", 01:25:21.919 "strip_size_kb": 64, 01:25:21.919 "state": "configuring", 01:25:21.919 "raid_level": "concat", 01:25:21.919 "superblock": false, 01:25:21.919 "num_base_bdevs": 4, 01:25:21.919 "num_base_bdevs_discovered": 2, 01:25:21.919 "num_base_bdevs_operational": 4, 01:25:21.919 "base_bdevs_list": [ 01:25:21.919 { 01:25:21.919 "name": "BaseBdev1", 01:25:21.919 "uuid": "701843d6-bdfa-4645-acbe-9a0b0e35721f", 01:25:21.919 "is_configured": true, 01:25:21.919 "data_offset": 0, 01:25:21.919 "data_size": 65536 01:25:21.919 }, 01:25:21.919 { 01:25:21.919 "name": null, 01:25:21.919 "uuid": "94cf3c8b-99f5-47f2-986a-ca951b3c2e05", 01:25:21.919 "is_configured": false, 01:25:21.919 "data_offset": 0, 01:25:21.919 "data_size": 65536 01:25:21.919 }, 01:25:21.919 { 01:25:21.919 "name": null, 01:25:21.919 "uuid": "3f42b55a-e43b-40ec-872e-19214214680a", 01:25:21.919 "is_configured": false, 01:25:21.919 "data_offset": 0, 01:25:21.919 "data_size": 65536 01:25:21.919 }, 01:25:21.919 { 01:25:21.919 "name": "BaseBdev4", 01:25:21.919 "uuid": "03ad68db-3c19-4d23-ac45-d90e3846d8e7", 01:25:21.919 "is_configured": true, 01:25:21.919 "data_offset": 0, 01:25:21.919 "data_size": 65536 01:25:21.919 } 01:25:21.919 ] 01:25:21.919 }' 01:25:21.919 05:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:25:21.919 05:20:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:25:22.504 05:20:13 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 01:25:22.504 05:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 01:25:22.504 05:20:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:22.504 05:20:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:25:22.504 05:20:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:22.504 05:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 01:25:22.504 05:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 01:25:22.504 05:20:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:22.504 05:20:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:25:22.504 [2024-12-09 05:20:13.909629] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 01:25:22.504 05:20:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:22.504 05:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 01:25:22.504 05:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:25:22.504 05:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:25:22.504 05:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 01:25:22.504 05:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:25:22.504 05:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 01:25:22.504 05:20:13 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:25:22.504 05:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:25:22.504 05:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:25:22.504 05:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:25:22.504 05:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:25:22.504 05:20:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:22.504 05:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:25:22.504 05:20:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:25:22.504 05:20:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:22.504 05:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:25:22.504 "name": "Existed_Raid", 01:25:22.504 "uuid": "00000000-0000-0000-0000-000000000000", 01:25:22.504 "strip_size_kb": 64, 01:25:22.504 "state": "configuring", 01:25:22.504 "raid_level": "concat", 01:25:22.504 "superblock": false, 01:25:22.504 "num_base_bdevs": 4, 01:25:22.504 "num_base_bdevs_discovered": 3, 01:25:22.504 "num_base_bdevs_operational": 4, 01:25:22.504 "base_bdevs_list": [ 01:25:22.504 { 01:25:22.504 "name": "BaseBdev1", 01:25:22.504 "uuid": "701843d6-bdfa-4645-acbe-9a0b0e35721f", 01:25:22.504 "is_configured": true, 01:25:22.504 "data_offset": 0, 01:25:22.504 "data_size": 65536 01:25:22.504 }, 01:25:22.504 { 01:25:22.504 "name": null, 01:25:22.504 "uuid": "94cf3c8b-99f5-47f2-986a-ca951b3c2e05", 01:25:22.504 "is_configured": false, 01:25:22.504 "data_offset": 0, 01:25:22.504 "data_size": 65536 01:25:22.504 }, 01:25:22.504 { 01:25:22.504 "name": "BaseBdev3", 01:25:22.504 "uuid": 
"3f42b55a-e43b-40ec-872e-19214214680a", 01:25:22.504 "is_configured": true, 01:25:22.504 "data_offset": 0, 01:25:22.504 "data_size": 65536 01:25:22.504 }, 01:25:22.504 { 01:25:22.504 "name": "BaseBdev4", 01:25:22.504 "uuid": "03ad68db-3c19-4d23-ac45-d90e3846d8e7", 01:25:22.504 "is_configured": true, 01:25:22.504 "data_offset": 0, 01:25:22.504 "data_size": 65536 01:25:22.504 } 01:25:22.504 ] 01:25:22.504 }' 01:25:22.504 05:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:25:22.504 05:20:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:25:23.070 05:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 01:25:23.070 05:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 01:25:23.070 05:20:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:23.070 05:20:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:25:23.070 05:20:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:23.070 05:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 01:25:23.070 05:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 01:25:23.070 05:20:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:23.070 05:20:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:25:23.070 [2024-12-09 05:20:14.489834] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 01:25:23.070 05:20:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:23.070 05:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
01:25:23.070 05:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:25:23.070 05:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:25:23.070 05:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 01:25:23.070 05:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:25:23.070 05:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 01:25:23.070 05:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:25:23.070 05:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:25:23.070 05:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:25:23.070 05:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:25:23.070 05:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:25:23.070 05:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:25:23.070 05:20:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:23.070 05:20:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:25:23.070 05:20:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:23.070 05:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:25:23.070 "name": "Existed_Raid", 01:25:23.070 "uuid": "00000000-0000-0000-0000-000000000000", 01:25:23.070 "strip_size_kb": 64, 01:25:23.070 "state": "configuring", 01:25:23.070 "raid_level": "concat", 01:25:23.070 "superblock": false, 01:25:23.070 "num_base_bdevs": 4, 01:25:23.070 
"num_base_bdevs_discovered": 2, 01:25:23.070 "num_base_bdevs_operational": 4, 01:25:23.070 "base_bdevs_list": [ 01:25:23.070 { 01:25:23.070 "name": null, 01:25:23.070 "uuid": "701843d6-bdfa-4645-acbe-9a0b0e35721f", 01:25:23.070 "is_configured": false, 01:25:23.070 "data_offset": 0, 01:25:23.070 "data_size": 65536 01:25:23.070 }, 01:25:23.070 { 01:25:23.070 "name": null, 01:25:23.070 "uuid": "94cf3c8b-99f5-47f2-986a-ca951b3c2e05", 01:25:23.070 "is_configured": false, 01:25:23.070 "data_offset": 0, 01:25:23.070 "data_size": 65536 01:25:23.070 }, 01:25:23.070 { 01:25:23.070 "name": "BaseBdev3", 01:25:23.070 "uuid": "3f42b55a-e43b-40ec-872e-19214214680a", 01:25:23.070 "is_configured": true, 01:25:23.070 "data_offset": 0, 01:25:23.070 "data_size": 65536 01:25:23.070 }, 01:25:23.070 { 01:25:23.070 "name": "BaseBdev4", 01:25:23.070 "uuid": "03ad68db-3c19-4d23-ac45-d90e3846d8e7", 01:25:23.070 "is_configured": true, 01:25:23.070 "data_offset": 0, 01:25:23.070 "data_size": 65536 01:25:23.070 } 01:25:23.070 ] 01:25:23.070 }' 01:25:23.070 05:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:25:23.070 05:20:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:25:23.634 05:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 01:25:23.634 05:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:23.634 05:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 01:25:23.634 05:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:25:23.634 05:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:23.634 05:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 01:25:23.634 05:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- 
# rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 01:25:23.634 05:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:23.634 05:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:25:23.634 [2024-12-09 05:20:15.123321] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 01:25:23.634 05:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:23.634 05:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 01:25:23.634 05:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:25:23.634 05:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:25:23.634 05:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 01:25:23.634 05:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:25:23.634 05:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 01:25:23.634 05:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:25:23.634 05:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:25:23.634 05:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:25:23.634 05:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:25:23.634 05:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:25:23.634 05:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:23.634 05:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
01:25:23.634 05:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:25:23.634 05:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:23.634 05:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:25:23.634 "name": "Existed_Raid", 01:25:23.634 "uuid": "00000000-0000-0000-0000-000000000000", 01:25:23.634 "strip_size_kb": 64, 01:25:23.634 "state": "configuring", 01:25:23.634 "raid_level": "concat", 01:25:23.634 "superblock": false, 01:25:23.634 "num_base_bdevs": 4, 01:25:23.634 "num_base_bdevs_discovered": 3, 01:25:23.634 "num_base_bdevs_operational": 4, 01:25:23.634 "base_bdevs_list": [ 01:25:23.634 { 01:25:23.634 "name": null, 01:25:23.634 "uuid": "701843d6-bdfa-4645-acbe-9a0b0e35721f", 01:25:23.634 "is_configured": false, 01:25:23.634 "data_offset": 0, 01:25:23.634 "data_size": 65536 01:25:23.634 }, 01:25:23.634 { 01:25:23.634 "name": "BaseBdev2", 01:25:23.634 "uuid": "94cf3c8b-99f5-47f2-986a-ca951b3c2e05", 01:25:23.634 "is_configured": true, 01:25:23.634 "data_offset": 0, 01:25:23.634 "data_size": 65536 01:25:23.634 }, 01:25:23.634 { 01:25:23.634 "name": "BaseBdev3", 01:25:23.634 "uuid": "3f42b55a-e43b-40ec-872e-19214214680a", 01:25:23.634 "is_configured": true, 01:25:23.634 "data_offset": 0, 01:25:23.634 "data_size": 65536 01:25:23.634 }, 01:25:23.634 { 01:25:23.634 "name": "BaseBdev4", 01:25:23.634 "uuid": "03ad68db-3c19-4d23-ac45-d90e3846d8e7", 01:25:23.634 "is_configured": true, 01:25:23.634 "data_offset": 0, 01:25:23.634 "data_size": 65536 01:25:23.634 } 01:25:23.634 ] 01:25:23.634 }' 01:25:23.634 05:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:25:23.634 05:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:25:24.198 05:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 
01:25:24.198 05:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:24.198 05:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:25:24.198 05:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 01:25:24.198 05:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:24.198 05:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 01:25:24.198 05:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 01:25:24.198 05:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 01:25:24.198 05:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:24.198 05:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:25:24.198 05:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:24.198 05:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 701843d6-bdfa-4645-acbe-9a0b0e35721f 01:25:24.198 05:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:24.198 05:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:25:24.198 [2024-12-09 05:20:15.773612] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 01:25:24.198 [2024-12-09 05:20:15.773687] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 01:25:24.199 [2024-12-09 05:20:15.773699] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 01:25:24.199 [2024-12-09 05:20:15.774060] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d0000063c0 01:25:24.199 [2024-12-09 05:20:15.774234] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 01:25:24.199 [2024-12-09 05:20:15.774253] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 01:25:24.199 [2024-12-09 05:20:15.774616] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:25:24.199 NewBaseBdev 01:25:24.199 05:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:24.199 05:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 01:25:24.199 05:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 01:25:24.199 05:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 01:25:24.199 05:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 01:25:24.199 05:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 01:25:24.199 05:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 01:25:24.199 05:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 01:25:24.199 05:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:24.199 05:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:25:24.199 05:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:24.199 05:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 01:25:24.199 05:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:24.199 05:20:15 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 01:25:24.199 [ 01:25:24.199 { 01:25:24.199 "name": "NewBaseBdev", 01:25:24.199 "aliases": [ 01:25:24.199 "701843d6-bdfa-4645-acbe-9a0b0e35721f" 01:25:24.199 ], 01:25:24.199 "product_name": "Malloc disk", 01:25:24.199 "block_size": 512, 01:25:24.199 "num_blocks": 65536, 01:25:24.199 "uuid": "701843d6-bdfa-4645-acbe-9a0b0e35721f", 01:25:24.199 "assigned_rate_limits": { 01:25:24.199 "rw_ios_per_sec": 0, 01:25:24.199 "rw_mbytes_per_sec": 0, 01:25:24.199 "r_mbytes_per_sec": 0, 01:25:24.199 "w_mbytes_per_sec": 0 01:25:24.199 }, 01:25:24.199 "claimed": true, 01:25:24.199 "claim_type": "exclusive_write", 01:25:24.199 "zoned": false, 01:25:24.199 "supported_io_types": { 01:25:24.199 "read": true, 01:25:24.199 "write": true, 01:25:24.199 "unmap": true, 01:25:24.199 "flush": true, 01:25:24.199 "reset": true, 01:25:24.199 "nvme_admin": false, 01:25:24.199 "nvme_io": false, 01:25:24.199 "nvme_io_md": false, 01:25:24.199 "write_zeroes": true, 01:25:24.199 "zcopy": true, 01:25:24.199 "get_zone_info": false, 01:25:24.199 "zone_management": false, 01:25:24.199 "zone_append": false, 01:25:24.199 "compare": false, 01:25:24.199 "compare_and_write": false, 01:25:24.199 "abort": true, 01:25:24.199 "seek_hole": false, 01:25:24.199 "seek_data": false, 01:25:24.199 "copy": true, 01:25:24.199 "nvme_iov_md": false 01:25:24.199 }, 01:25:24.199 "memory_domains": [ 01:25:24.199 { 01:25:24.199 "dma_device_id": "system", 01:25:24.199 "dma_device_type": 1 01:25:24.199 }, 01:25:24.199 { 01:25:24.199 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:25:24.199 "dma_device_type": 2 01:25:24.199 } 01:25:24.199 ], 01:25:24.199 "driver_specific": {} 01:25:24.199 } 01:25:24.199 ] 01:25:24.199 05:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:24.199 05:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 01:25:24.199 05:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 
-- # verify_raid_bdev_state Existed_Raid online concat 64 4 01:25:24.199 05:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:25:24.199 05:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:25:24.199 05:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 01:25:24.199 05:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:25:24.199 05:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 01:25:24.199 05:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:25:24.199 05:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:25:24.199 05:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:25:24.199 05:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:25:24.457 05:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:25:24.457 05:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:25:24.457 05:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:24.457 05:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:25:24.457 05:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:24.457 05:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:25:24.457 "name": "Existed_Raid", 01:25:24.457 "uuid": "584a607a-bfed-48db-93df-bc4b76a5cdf2", 01:25:24.457 "strip_size_kb": 64, 01:25:24.457 "state": "online", 01:25:24.457 "raid_level": "concat", 01:25:24.457 "superblock": false, 01:25:24.457 
"num_base_bdevs": 4, 01:25:24.457 "num_base_bdevs_discovered": 4, 01:25:24.457 "num_base_bdevs_operational": 4, 01:25:24.457 "base_bdevs_list": [ 01:25:24.457 { 01:25:24.457 "name": "NewBaseBdev", 01:25:24.457 "uuid": "701843d6-bdfa-4645-acbe-9a0b0e35721f", 01:25:24.457 "is_configured": true, 01:25:24.457 "data_offset": 0, 01:25:24.457 "data_size": 65536 01:25:24.457 }, 01:25:24.457 { 01:25:24.457 "name": "BaseBdev2", 01:25:24.457 "uuid": "94cf3c8b-99f5-47f2-986a-ca951b3c2e05", 01:25:24.457 "is_configured": true, 01:25:24.457 "data_offset": 0, 01:25:24.457 "data_size": 65536 01:25:24.457 }, 01:25:24.457 { 01:25:24.457 "name": "BaseBdev3", 01:25:24.457 "uuid": "3f42b55a-e43b-40ec-872e-19214214680a", 01:25:24.457 "is_configured": true, 01:25:24.457 "data_offset": 0, 01:25:24.457 "data_size": 65536 01:25:24.457 }, 01:25:24.457 { 01:25:24.457 "name": "BaseBdev4", 01:25:24.457 "uuid": "03ad68db-3c19-4d23-ac45-d90e3846d8e7", 01:25:24.457 "is_configured": true, 01:25:24.457 "data_offset": 0, 01:25:24.457 "data_size": 65536 01:25:24.457 } 01:25:24.457 ] 01:25:24.457 }' 01:25:24.457 05:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:25:24.457 05:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:25:24.715 05:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 01:25:24.715 05:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 01:25:24.715 05:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 01:25:24.715 05:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 01:25:24.715 05:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 01:25:24.715 05:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 01:25:24.715 05:20:16 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 01:25:24.715 05:20:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:24.715 05:20:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:25:24.715 05:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 01:25:24.715 [2024-12-09 05:20:16.294262] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 01:25:24.715 05:20:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:24.973 05:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 01:25:24.973 "name": "Existed_Raid", 01:25:24.973 "aliases": [ 01:25:24.973 "584a607a-bfed-48db-93df-bc4b76a5cdf2" 01:25:24.973 ], 01:25:24.973 "product_name": "Raid Volume", 01:25:24.973 "block_size": 512, 01:25:24.973 "num_blocks": 262144, 01:25:24.973 "uuid": "584a607a-bfed-48db-93df-bc4b76a5cdf2", 01:25:24.973 "assigned_rate_limits": { 01:25:24.973 "rw_ios_per_sec": 0, 01:25:24.973 "rw_mbytes_per_sec": 0, 01:25:24.973 "r_mbytes_per_sec": 0, 01:25:24.973 "w_mbytes_per_sec": 0 01:25:24.973 }, 01:25:24.973 "claimed": false, 01:25:24.973 "zoned": false, 01:25:24.974 "supported_io_types": { 01:25:24.974 "read": true, 01:25:24.974 "write": true, 01:25:24.974 "unmap": true, 01:25:24.974 "flush": true, 01:25:24.974 "reset": true, 01:25:24.974 "nvme_admin": false, 01:25:24.974 "nvme_io": false, 01:25:24.974 "nvme_io_md": false, 01:25:24.974 "write_zeroes": true, 01:25:24.974 "zcopy": false, 01:25:24.974 "get_zone_info": false, 01:25:24.974 "zone_management": false, 01:25:24.974 "zone_append": false, 01:25:24.974 "compare": false, 01:25:24.974 "compare_and_write": false, 01:25:24.974 "abort": false, 01:25:24.974 "seek_hole": false, 01:25:24.974 "seek_data": false, 01:25:24.974 "copy": false, 01:25:24.974 "nvme_iov_md": false 01:25:24.974 }, 
01:25:24.974 "memory_domains": [ 01:25:24.974 { 01:25:24.974 "dma_device_id": "system", 01:25:24.974 "dma_device_type": 1 01:25:24.974 }, 01:25:24.974 { 01:25:24.974 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:25:24.974 "dma_device_type": 2 01:25:24.974 }, 01:25:24.974 { 01:25:24.974 "dma_device_id": "system", 01:25:24.974 "dma_device_type": 1 01:25:24.974 }, 01:25:24.974 { 01:25:24.974 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:25:24.974 "dma_device_type": 2 01:25:24.974 }, 01:25:24.974 { 01:25:24.974 "dma_device_id": "system", 01:25:24.974 "dma_device_type": 1 01:25:24.974 }, 01:25:24.974 { 01:25:24.974 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:25:24.974 "dma_device_type": 2 01:25:24.974 }, 01:25:24.974 { 01:25:24.974 "dma_device_id": "system", 01:25:24.974 "dma_device_type": 1 01:25:24.974 }, 01:25:24.974 { 01:25:24.974 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:25:24.974 "dma_device_type": 2 01:25:24.974 } 01:25:24.974 ], 01:25:24.974 "driver_specific": { 01:25:24.974 "raid": { 01:25:24.974 "uuid": "584a607a-bfed-48db-93df-bc4b76a5cdf2", 01:25:24.974 "strip_size_kb": 64, 01:25:24.974 "state": "online", 01:25:24.974 "raid_level": "concat", 01:25:24.974 "superblock": false, 01:25:24.974 "num_base_bdevs": 4, 01:25:24.974 "num_base_bdevs_discovered": 4, 01:25:24.974 "num_base_bdevs_operational": 4, 01:25:24.974 "base_bdevs_list": [ 01:25:24.974 { 01:25:24.974 "name": "NewBaseBdev", 01:25:24.974 "uuid": "701843d6-bdfa-4645-acbe-9a0b0e35721f", 01:25:24.974 "is_configured": true, 01:25:24.974 "data_offset": 0, 01:25:24.974 "data_size": 65536 01:25:24.974 }, 01:25:24.974 { 01:25:24.974 "name": "BaseBdev2", 01:25:24.974 "uuid": "94cf3c8b-99f5-47f2-986a-ca951b3c2e05", 01:25:24.974 "is_configured": true, 01:25:24.974 "data_offset": 0, 01:25:24.974 "data_size": 65536 01:25:24.974 }, 01:25:24.974 { 01:25:24.974 "name": "BaseBdev3", 01:25:24.974 "uuid": "3f42b55a-e43b-40ec-872e-19214214680a", 01:25:24.974 "is_configured": true, 01:25:24.974 "data_offset": 0, 
01:25:24.974 "data_size": 65536 01:25:24.974 }, 01:25:24.974 { 01:25:24.974 "name": "BaseBdev4", 01:25:24.974 "uuid": "03ad68db-3c19-4d23-ac45-d90e3846d8e7", 01:25:24.974 "is_configured": true, 01:25:24.974 "data_offset": 0, 01:25:24.974 "data_size": 65536 01:25:24.974 } 01:25:24.974 ] 01:25:24.974 } 01:25:24.974 } 01:25:24.974 }' 01:25:24.974 05:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 01:25:24.974 05:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 01:25:24.974 BaseBdev2 01:25:24.974 BaseBdev3 01:25:24.974 BaseBdev4' 01:25:24.974 05:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:25:24.974 05:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 01:25:24.974 05:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:25:24.974 05:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 01:25:24.974 05:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:25:24.974 05:20:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:24.974 05:20:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:25:24.974 05:20:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:24.974 05:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:25:24.974 05:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:25:24.974 05:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name 
in $base_bdev_names 01:25:24.974 05:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 01:25:24.974 05:20:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:24.974 05:20:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:25:24.974 05:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:25:24.974 05:20:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:24.974 05:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:25:24.974 05:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:25:24.974 05:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:25:24.974 05:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 01:25:24.974 05:20:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:24.974 05:20:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:25:24.974 05:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:25:24.974 05:20:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:25.233 05:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:25:25.233 05:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:25:25.233 05:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:25:25.233 05:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev4 01:25:25.233 05:20:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:25.233 05:20:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:25:25.233 05:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:25:25.233 05:20:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:25.233 05:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:25:25.233 05:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:25:25.233 05:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 01:25:25.233 05:20:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:25.233 05:20:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:25:25.233 [2024-12-09 05:20:16.653923] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 01:25:25.233 [2024-12-09 05:20:16.654100] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 01:25:25.233 [2024-12-09 05:20:16.654313] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 01:25:25.233 [2024-12-09 05:20:16.654577] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 01:25:25.233 [2024-12-09 05:20:16.654606] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 01:25:25.233 05:20:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:25.233 05:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 71325 01:25:25.233 05:20:16 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 71325 ']' 01:25:25.233 05:20:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 71325 01:25:25.233 05:20:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 01:25:25.233 05:20:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:25:25.233 05:20:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71325 01:25:25.233 05:20:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:25:25.233 05:20:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:25:25.233 05:20:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71325' 01:25:25.233 killing process with pid 71325 01:25:25.233 05:20:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 71325 01:25:25.233 [2024-12-09 05:20:16.689654] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 01:25:25.233 05:20:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 71325 01:25:25.492 [2024-12-09 05:20:17.053020] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 01:25:26.868 05:20:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 01:25:26.868 01:25:26.868 real 0m12.979s 01:25:26.868 user 0m21.407s 01:25:26.868 sys 0m1.780s 01:25:26.868 ************************************ 01:25:26.868 END TEST raid_state_function_test 01:25:26.868 ************************************ 01:25:26.868 05:20:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 01:25:26.868 05:20:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:25:26.868 05:20:18 bdev_raid -- bdev/bdev_raid.sh@969 -- # 
run_test raid_state_function_test_sb raid_state_function_test concat 4 true 01:25:26.868 05:20:18 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 01:25:26.868 05:20:18 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 01:25:26.868 05:20:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 01:25:26.868 ************************************ 01:25:26.868 START TEST raid_state_function_test_sb 01:25:26.868 ************************************ 01:25:26.868 05:20:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 true 01:25:26.868 05:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 01:25:26.868 05:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 01:25:26.868 05:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 01:25:26.868 05:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 01:25:26.868 05:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 01:25:26.868 05:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 01:25:26.868 05:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 01:25:26.868 05:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 01:25:26.868 05:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 01:25:26.868 05:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 01:25:26.868 05:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 01:25:26.868 05:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 01:25:26.868 05:20:18 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@211 -- # echo BaseBdev3 01:25:26.868 05:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 01:25:26.868 05:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 01:25:26.868 05:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 01:25:26.868 05:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 01:25:26.868 05:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 01:25:26.868 05:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 01:25:26.868 05:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 01:25:26.868 05:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 01:25:26.868 05:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 01:25:26.868 05:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 01:25:26.868 Process raid pid: 72012 01:25:26.868 05:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 01:25:26.868 05:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 01:25:26.868 05:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 01:25:26.868 05:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 01:25:26.868 05:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 01:25:26.868 05:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 01:25:26.868 05:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # 
raid_pid=72012 01:25:26.868 05:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 72012' 01:25:26.868 05:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 72012 01:25:26.868 05:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 01:25:26.868 05:20:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 72012 ']' 01:25:26.869 05:20:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:25:26.869 05:20:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 01:25:26.869 05:20:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:25:26.869 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:25:26.869 05:20:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 01:25:26.869 05:20:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:25:26.869 [2024-12-09 05:20:18.372343] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
01:25:26.869 [2024-12-09 05:20:18.372716] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:25:27.127 [2024-12-09 05:20:18.552122] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:25:27.127 [2024-12-09 05:20:18.686317] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:25:27.386 [2024-12-09 05:20:18.925051] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 01:25:27.386 [2024-12-09 05:20:18.925407] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 01:25:27.957 05:20:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:25:27.957 05:20:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 01:25:27.957 05:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 01:25:27.957 05:20:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:27.957 05:20:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:25:27.957 [2024-12-09 05:20:19.360114] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 01:25:27.957 [2024-12-09 05:20:19.360188] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 01:25:27.957 [2024-12-09 05:20:19.360207] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 01:25:27.957 [2024-12-09 05:20:19.360225] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 01:25:27.957 [2024-12-09 05:20:19.360236] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 01:25:27.957 [2024-12-09 05:20:19.360251] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 01:25:27.957 [2024-12-09 05:20:19.360261] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 01:25:27.957 [2024-12-09 05:20:19.360277] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 01:25:27.957 05:20:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:27.957 05:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 01:25:27.957 05:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:25:27.957 05:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:25:27.957 05:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 01:25:27.957 05:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:25:27.957 05:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 01:25:27.957 05:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:25:27.957 05:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:25:27.957 05:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:25:27.957 05:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 01:25:27.957 05:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:25:27.957 05:20:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:27.957 05:20:19 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:25:27.957 05:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:25:27.957 05:20:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:27.957 05:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:25:27.957 "name": "Existed_Raid", 01:25:27.957 "uuid": "b5f6a6ad-3c46-4c6a-a33c-941b7b552418", 01:25:27.957 "strip_size_kb": 64, 01:25:27.957 "state": "configuring", 01:25:27.957 "raid_level": "concat", 01:25:27.957 "superblock": true, 01:25:27.957 "num_base_bdevs": 4, 01:25:27.957 "num_base_bdevs_discovered": 0, 01:25:27.957 "num_base_bdevs_operational": 4, 01:25:27.957 "base_bdevs_list": [ 01:25:27.957 { 01:25:27.957 "name": "BaseBdev1", 01:25:27.957 "uuid": "00000000-0000-0000-0000-000000000000", 01:25:27.957 "is_configured": false, 01:25:27.957 "data_offset": 0, 01:25:27.957 "data_size": 0 01:25:27.957 }, 01:25:27.957 { 01:25:27.957 "name": "BaseBdev2", 01:25:27.957 "uuid": "00000000-0000-0000-0000-000000000000", 01:25:27.957 "is_configured": false, 01:25:27.957 "data_offset": 0, 01:25:27.957 "data_size": 0 01:25:27.957 }, 01:25:27.957 { 01:25:27.957 "name": "BaseBdev3", 01:25:27.957 "uuid": "00000000-0000-0000-0000-000000000000", 01:25:27.957 "is_configured": false, 01:25:27.957 "data_offset": 0, 01:25:27.957 "data_size": 0 01:25:27.957 }, 01:25:27.957 { 01:25:27.957 "name": "BaseBdev4", 01:25:27.957 "uuid": "00000000-0000-0000-0000-000000000000", 01:25:27.957 "is_configured": false, 01:25:27.957 "data_offset": 0, 01:25:27.957 "data_size": 0 01:25:27.957 } 01:25:27.957 ] 01:25:27.957 }' 01:25:27.957 05:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:25:27.957 05:20:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:25:28.524 05:20:19 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 01:25:28.524 05:20:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:28.524 05:20:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:25:28.524 [2024-12-09 05:20:19.868185] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 01:25:28.524 [2024-12-09 05:20:19.868235] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 01:25:28.524 05:20:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:28.524 05:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 01:25:28.524 05:20:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:28.524 05:20:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:25:28.524 [2024-12-09 05:20:19.876206] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 01:25:28.524 [2024-12-09 05:20:19.876275] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 01:25:28.524 [2024-12-09 05:20:19.876291] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 01:25:28.524 [2024-12-09 05:20:19.876307] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 01:25:28.524 [2024-12-09 05:20:19.876317] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 01:25:28.524 [2024-12-09 05:20:19.876331] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 01:25:28.524 [2024-12-09 05:20:19.876342] bdev.c:8674:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 01:25:28.524 [2024-12-09 05:20:19.876356] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 01:25:28.524 05:20:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:28.524 05:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 01:25:28.524 05:20:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:28.524 05:20:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:25:28.524 [2024-12-09 05:20:19.921178] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 01:25:28.524 BaseBdev1 01:25:28.524 05:20:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:28.524 05:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 01:25:28.524 05:20:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 01:25:28.524 05:20:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 01:25:28.524 05:20:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 01:25:28.524 05:20:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 01:25:28.524 05:20:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 01:25:28.524 05:20:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 01:25:28.524 05:20:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:28.524 05:20:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:25:28.524 05:20:19 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:28.524 05:20:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 01:25:28.524 05:20:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:28.524 05:20:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:25:28.524 [ 01:25:28.524 { 01:25:28.524 "name": "BaseBdev1", 01:25:28.524 "aliases": [ 01:25:28.524 "95681261-3dd0-49cf-bc4f-527084255cf8" 01:25:28.524 ], 01:25:28.524 "product_name": "Malloc disk", 01:25:28.524 "block_size": 512, 01:25:28.524 "num_blocks": 65536, 01:25:28.524 "uuid": "95681261-3dd0-49cf-bc4f-527084255cf8", 01:25:28.524 "assigned_rate_limits": { 01:25:28.524 "rw_ios_per_sec": 0, 01:25:28.524 "rw_mbytes_per_sec": 0, 01:25:28.524 "r_mbytes_per_sec": 0, 01:25:28.524 "w_mbytes_per_sec": 0 01:25:28.524 }, 01:25:28.524 "claimed": true, 01:25:28.524 "claim_type": "exclusive_write", 01:25:28.524 "zoned": false, 01:25:28.524 "supported_io_types": { 01:25:28.524 "read": true, 01:25:28.524 "write": true, 01:25:28.524 "unmap": true, 01:25:28.524 "flush": true, 01:25:28.524 "reset": true, 01:25:28.524 "nvme_admin": false, 01:25:28.524 "nvme_io": false, 01:25:28.524 "nvme_io_md": false, 01:25:28.524 "write_zeroes": true, 01:25:28.524 "zcopy": true, 01:25:28.524 "get_zone_info": false, 01:25:28.524 "zone_management": false, 01:25:28.524 "zone_append": false, 01:25:28.524 "compare": false, 01:25:28.524 "compare_and_write": false, 01:25:28.524 "abort": true, 01:25:28.524 "seek_hole": false, 01:25:28.524 "seek_data": false, 01:25:28.524 "copy": true, 01:25:28.524 "nvme_iov_md": false 01:25:28.524 }, 01:25:28.524 "memory_domains": [ 01:25:28.524 { 01:25:28.524 "dma_device_id": "system", 01:25:28.524 "dma_device_type": 1 01:25:28.524 }, 01:25:28.524 { 01:25:28.524 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:25:28.524 "dma_device_type": 2 01:25:28.524 } 
01:25:28.524 ], 01:25:28.524 "driver_specific": {} 01:25:28.524 } 01:25:28.524 ] 01:25:28.525 05:20:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:28.525 05:20:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 01:25:28.525 05:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 01:25:28.525 05:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:25:28.525 05:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:25:28.525 05:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 01:25:28.525 05:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:25:28.525 05:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 01:25:28.525 05:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:25:28.525 05:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:25:28.525 05:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:25:28.525 05:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 01:25:28.525 05:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:25:28.525 05:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:25:28.525 05:20:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:28.525 05:20:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:25:28.525 05:20:19 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:28.525 05:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:25:28.525 "name": "Existed_Raid", 01:25:28.525 "uuid": "ffaa1544-94c3-4281-adc5-a5f56ae50074", 01:25:28.525 "strip_size_kb": 64, 01:25:28.525 "state": "configuring", 01:25:28.525 "raid_level": "concat", 01:25:28.525 "superblock": true, 01:25:28.525 "num_base_bdevs": 4, 01:25:28.525 "num_base_bdevs_discovered": 1, 01:25:28.525 "num_base_bdevs_operational": 4, 01:25:28.525 "base_bdevs_list": [ 01:25:28.525 { 01:25:28.525 "name": "BaseBdev1", 01:25:28.525 "uuid": "95681261-3dd0-49cf-bc4f-527084255cf8", 01:25:28.525 "is_configured": true, 01:25:28.525 "data_offset": 2048, 01:25:28.525 "data_size": 63488 01:25:28.525 }, 01:25:28.525 { 01:25:28.525 "name": "BaseBdev2", 01:25:28.525 "uuid": "00000000-0000-0000-0000-000000000000", 01:25:28.525 "is_configured": false, 01:25:28.525 "data_offset": 0, 01:25:28.525 "data_size": 0 01:25:28.525 }, 01:25:28.525 { 01:25:28.525 "name": "BaseBdev3", 01:25:28.525 "uuid": "00000000-0000-0000-0000-000000000000", 01:25:28.525 "is_configured": false, 01:25:28.525 "data_offset": 0, 01:25:28.525 "data_size": 0 01:25:28.525 }, 01:25:28.525 { 01:25:28.525 "name": "BaseBdev4", 01:25:28.525 "uuid": "00000000-0000-0000-0000-000000000000", 01:25:28.525 "is_configured": false, 01:25:28.525 "data_offset": 0, 01:25:28.525 "data_size": 0 01:25:28.525 } 01:25:28.525 ] 01:25:28.525 }' 01:25:28.525 05:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:25:28.525 05:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:25:29.091 05:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 01:25:29.091 05:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:29.091 05:20:20 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:25:29.091 [2024-12-09 05:20:20.477389] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 01:25:29.091 [2024-12-09 05:20:20.477474] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 01:25:29.091 05:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:29.091 05:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 01:25:29.091 05:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:29.091 05:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:25:29.091 [2024-12-09 05:20:20.485455] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 01:25:29.092 [2024-12-09 05:20:20.488025] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 01:25:29.092 [2024-12-09 05:20:20.488099] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 01:25:29.092 [2024-12-09 05:20:20.488117] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 01:25:29.092 [2024-12-09 05:20:20.488136] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 01:25:29.092 [2024-12-09 05:20:20.488147] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 01:25:29.092 [2024-12-09 05:20:20.488162] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 01:25:29.092 05:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:29.092 05:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
(( i = 1 )) 01:25:29.092 05:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 01:25:29.092 05:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 01:25:29.092 05:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:25:29.092 05:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:25:29.092 05:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 01:25:29.092 05:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:25:29.092 05:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 01:25:29.092 05:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:25:29.092 05:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:25:29.092 05:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:25:29.092 05:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 01:25:29.092 05:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:25:29.092 05:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:25:29.092 05:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:29.092 05:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:25:29.092 05:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:29.092 05:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 01:25:29.092 "name": "Existed_Raid", 01:25:29.092 "uuid": "798bd1fa-947c-4689-987d-f45842470fa8", 01:25:29.092 "strip_size_kb": 64, 01:25:29.092 "state": "configuring", 01:25:29.092 "raid_level": "concat", 01:25:29.092 "superblock": true, 01:25:29.092 "num_base_bdevs": 4, 01:25:29.092 "num_base_bdevs_discovered": 1, 01:25:29.092 "num_base_bdevs_operational": 4, 01:25:29.092 "base_bdevs_list": [ 01:25:29.092 { 01:25:29.092 "name": "BaseBdev1", 01:25:29.092 "uuid": "95681261-3dd0-49cf-bc4f-527084255cf8", 01:25:29.092 "is_configured": true, 01:25:29.092 "data_offset": 2048, 01:25:29.092 "data_size": 63488 01:25:29.092 }, 01:25:29.092 { 01:25:29.092 "name": "BaseBdev2", 01:25:29.092 "uuid": "00000000-0000-0000-0000-000000000000", 01:25:29.092 "is_configured": false, 01:25:29.092 "data_offset": 0, 01:25:29.092 "data_size": 0 01:25:29.092 }, 01:25:29.092 { 01:25:29.092 "name": "BaseBdev3", 01:25:29.092 "uuid": "00000000-0000-0000-0000-000000000000", 01:25:29.092 "is_configured": false, 01:25:29.092 "data_offset": 0, 01:25:29.092 "data_size": 0 01:25:29.092 }, 01:25:29.092 { 01:25:29.092 "name": "BaseBdev4", 01:25:29.092 "uuid": "00000000-0000-0000-0000-000000000000", 01:25:29.092 "is_configured": false, 01:25:29.092 "data_offset": 0, 01:25:29.092 "data_size": 0 01:25:29.092 } 01:25:29.092 ] 01:25:29.092 }' 01:25:29.092 05:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:25:29.092 05:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:25:29.659 05:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 01:25:29.659 05:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:29.659 05:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:25:29.659 [2024-12-09 05:20:21.056661] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 01:25:29.659 BaseBdev2 01:25:29.659 05:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:29.659 05:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 01:25:29.659 05:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 01:25:29.659 05:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 01:25:29.659 05:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 01:25:29.659 05:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 01:25:29.659 05:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 01:25:29.659 05:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 01:25:29.659 05:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:29.659 05:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:25:29.659 05:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:29.659 05:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 01:25:29.659 05:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:29.659 05:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:25:29.659 [ 01:25:29.659 { 01:25:29.659 "name": "BaseBdev2", 01:25:29.659 "aliases": [ 01:25:29.659 "8fc8685b-46ad-4327-86e4-3c095f64911e" 01:25:29.659 ], 01:25:29.659 "product_name": "Malloc disk", 01:25:29.659 "block_size": 512, 01:25:29.659 "num_blocks": 65536, 01:25:29.659 "uuid": "8fc8685b-46ad-4327-86e4-3c095f64911e", 
01:25:29.659 "assigned_rate_limits": { 01:25:29.659 "rw_ios_per_sec": 0, 01:25:29.659 "rw_mbytes_per_sec": 0, 01:25:29.659 "r_mbytes_per_sec": 0, 01:25:29.659 "w_mbytes_per_sec": 0 01:25:29.659 }, 01:25:29.659 "claimed": true, 01:25:29.659 "claim_type": "exclusive_write", 01:25:29.659 "zoned": false, 01:25:29.659 "supported_io_types": { 01:25:29.659 "read": true, 01:25:29.659 "write": true, 01:25:29.659 "unmap": true, 01:25:29.659 "flush": true, 01:25:29.659 "reset": true, 01:25:29.659 "nvme_admin": false, 01:25:29.660 "nvme_io": false, 01:25:29.660 "nvme_io_md": false, 01:25:29.660 "write_zeroes": true, 01:25:29.660 "zcopy": true, 01:25:29.660 "get_zone_info": false, 01:25:29.660 "zone_management": false, 01:25:29.660 "zone_append": false, 01:25:29.660 "compare": false, 01:25:29.660 "compare_and_write": false, 01:25:29.660 "abort": true, 01:25:29.660 "seek_hole": false, 01:25:29.660 "seek_data": false, 01:25:29.660 "copy": true, 01:25:29.660 "nvme_iov_md": false 01:25:29.660 }, 01:25:29.660 "memory_domains": [ 01:25:29.660 { 01:25:29.660 "dma_device_id": "system", 01:25:29.660 "dma_device_type": 1 01:25:29.660 }, 01:25:29.660 { 01:25:29.660 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:25:29.660 "dma_device_type": 2 01:25:29.660 } 01:25:29.660 ], 01:25:29.660 "driver_specific": {} 01:25:29.660 } 01:25:29.660 ] 01:25:29.660 05:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:29.660 05:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 01:25:29.660 05:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 01:25:29.660 05:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 01:25:29.660 05:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 01:25:29.660 05:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 01:25:29.660 05:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:25:29.660 05:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 01:25:29.660 05:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:25:29.660 05:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 01:25:29.660 05:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:25:29.660 05:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:25:29.660 05:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:25:29.660 05:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 01:25:29.660 05:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:25:29.660 05:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:25:29.660 05:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:29.660 05:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:25:29.660 05:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:29.660 05:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:25:29.660 "name": "Existed_Raid", 01:25:29.660 "uuid": "798bd1fa-947c-4689-987d-f45842470fa8", 01:25:29.660 "strip_size_kb": 64, 01:25:29.660 "state": "configuring", 01:25:29.660 "raid_level": "concat", 01:25:29.660 "superblock": true, 01:25:29.660 "num_base_bdevs": 4, 01:25:29.660 "num_base_bdevs_discovered": 2, 01:25:29.660 
"num_base_bdevs_operational": 4, 01:25:29.660 "base_bdevs_list": [ 01:25:29.660 { 01:25:29.660 "name": "BaseBdev1", 01:25:29.660 "uuid": "95681261-3dd0-49cf-bc4f-527084255cf8", 01:25:29.660 "is_configured": true, 01:25:29.660 "data_offset": 2048, 01:25:29.660 "data_size": 63488 01:25:29.660 }, 01:25:29.660 { 01:25:29.660 "name": "BaseBdev2", 01:25:29.660 "uuid": "8fc8685b-46ad-4327-86e4-3c095f64911e", 01:25:29.660 "is_configured": true, 01:25:29.660 "data_offset": 2048, 01:25:29.660 "data_size": 63488 01:25:29.660 }, 01:25:29.660 { 01:25:29.660 "name": "BaseBdev3", 01:25:29.660 "uuid": "00000000-0000-0000-0000-000000000000", 01:25:29.660 "is_configured": false, 01:25:29.660 "data_offset": 0, 01:25:29.660 "data_size": 0 01:25:29.660 }, 01:25:29.660 { 01:25:29.660 "name": "BaseBdev4", 01:25:29.660 "uuid": "00000000-0000-0000-0000-000000000000", 01:25:29.660 "is_configured": false, 01:25:29.660 "data_offset": 0, 01:25:29.660 "data_size": 0 01:25:29.660 } 01:25:29.660 ] 01:25:29.660 }' 01:25:29.660 05:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:25:29.660 05:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:25:30.227 05:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 01:25:30.227 05:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:30.227 05:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:25:30.227 [2024-12-09 05:20:21.641400] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 01:25:30.227 BaseBdev3 01:25:30.227 05:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:30.227 05:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 01:25:30.227 05:20:21 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 01:25:30.227 05:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 01:25:30.227 05:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 01:25:30.227 05:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 01:25:30.227 05:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 01:25:30.227 05:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 01:25:30.227 05:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:30.227 05:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:25:30.227 05:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:30.228 05:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 01:25:30.228 05:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:30.228 05:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:25:30.228 [ 01:25:30.228 { 01:25:30.228 "name": "BaseBdev3", 01:25:30.228 "aliases": [ 01:25:30.228 "2ef31736-ff24-4c96-af69-5fc67fcce591" 01:25:30.228 ], 01:25:30.228 "product_name": "Malloc disk", 01:25:30.228 "block_size": 512, 01:25:30.228 "num_blocks": 65536, 01:25:30.228 "uuid": "2ef31736-ff24-4c96-af69-5fc67fcce591", 01:25:30.228 "assigned_rate_limits": { 01:25:30.228 "rw_ios_per_sec": 0, 01:25:30.228 "rw_mbytes_per_sec": 0, 01:25:30.228 "r_mbytes_per_sec": 0, 01:25:30.228 "w_mbytes_per_sec": 0 01:25:30.228 }, 01:25:30.228 "claimed": true, 01:25:30.228 "claim_type": "exclusive_write", 01:25:30.228 "zoned": false, 01:25:30.228 "supported_io_types": { 
01:25:30.228 "read": true, 01:25:30.228 "write": true, 01:25:30.228 "unmap": true, 01:25:30.228 "flush": true, 01:25:30.228 "reset": true, 01:25:30.228 "nvme_admin": false, 01:25:30.228 "nvme_io": false, 01:25:30.228 "nvme_io_md": false, 01:25:30.228 "write_zeroes": true, 01:25:30.228 "zcopy": true, 01:25:30.228 "get_zone_info": false, 01:25:30.228 "zone_management": false, 01:25:30.228 "zone_append": false, 01:25:30.228 "compare": false, 01:25:30.228 "compare_and_write": false, 01:25:30.228 "abort": true, 01:25:30.228 "seek_hole": false, 01:25:30.228 "seek_data": false, 01:25:30.228 "copy": true, 01:25:30.228 "nvme_iov_md": false 01:25:30.228 }, 01:25:30.228 "memory_domains": [ 01:25:30.228 { 01:25:30.228 "dma_device_id": "system", 01:25:30.228 "dma_device_type": 1 01:25:30.228 }, 01:25:30.228 { 01:25:30.228 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:25:30.228 "dma_device_type": 2 01:25:30.228 } 01:25:30.228 ], 01:25:30.228 "driver_specific": {} 01:25:30.228 } 01:25:30.228 ] 01:25:30.228 05:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:30.228 05:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 01:25:30.228 05:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 01:25:30.228 05:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 01:25:30.228 05:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 01:25:30.228 05:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:25:30.228 05:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:25:30.228 05:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 01:25:30.228 05:20:21 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 01:25:30.228 05:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 01:25:30.228 05:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:25:30.228 05:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:25:30.228 05:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:25:30.228 05:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 01:25:30.228 05:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:25:30.228 05:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:25:30.228 05:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:30.228 05:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:25:30.228 05:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:30.228 05:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:25:30.228 "name": "Existed_Raid", 01:25:30.228 "uuid": "798bd1fa-947c-4689-987d-f45842470fa8", 01:25:30.228 "strip_size_kb": 64, 01:25:30.228 "state": "configuring", 01:25:30.228 "raid_level": "concat", 01:25:30.228 "superblock": true, 01:25:30.228 "num_base_bdevs": 4, 01:25:30.228 "num_base_bdevs_discovered": 3, 01:25:30.228 "num_base_bdevs_operational": 4, 01:25:30.228 "base_bdevs_list": [ 01:25:30.228 { 01:25:30.228 "name": "BaseBdev1", 01:25:30.228 "uuid": "95681261-3dd0-49cf-bc4f-527084255cf8", 01:25:30.228 "is_configured": true, 01:25:30.228 "data_offset": 2048, 01:25:30.228 "data_size": 63488 01:25:30.228 }, 01:25:30.228 { 01:25:30.228 "name": "BaseBdev2", 01:25:30.228 
"uuid": "8fc8685b-46ad-4327-86e4-3c095f64911e", 01:25:30.228 "is_configured": true, 01:25:30.228 "data_offset": 2048, 01:25:30.228 "data_size": 63488 01:25:30.228 }, 01:25:30.228 { 01:25:30.228 "name": "BaseBdev3", 01:25:30.228 "uuid": "2ef31736-ff24-4c96-af69-5fc67fcce591", 01:25:30.228 "is_configured": true, 01:25:30.228 "data_offset": 2048, 01:25:30.228 "data_size": 63488 01:25:30.228 }, 01:25:30.228 { 01:25:30.228 "name": "BaseBdev4", 01:25:30.228 "uuid": "00000000-0000-0000-0000-000000000000", 01:25:30.228 "is_configured": false, 01:25:30.228 "data_offset": 0, 01:25:30.228 "data_size": 0 01:25:30.228 } 01:25:30.228 ] 01:25:30.228 }' 01:25:30.228 05:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:25:30.228 05:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:25:30.795 05:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 01:25:30.795 05:20:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:30.795 05:20:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:25:30.795 [2024-12-09 05:20:22.228002] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 01:25:30.795 [2024-12-09 05:20:22.228339] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 01:25:30.795 [2024-12-09 05:20:22.228377] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 01:25:30.795 [2024-12-09 05:20:22.228752] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 01:25:30.795 [2024-12-09 05:20:22.228954] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 01:25:30.795 [2024-12-09 05:20:22.228982] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 
0x617000007e80 01:25:30.795 [2024-12-09 05:20:22.229164] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:25:30.795 BaseBdev4 01:25:30.795 05:20:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:30.795 05:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 01:25:30.795 05:20:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 01:25:30.795 05:20:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 01:25:30.795 05:20:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 01:25:30.795 05:20:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 01:25:30.795 05:20:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 01:25:30.795 05:20:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 01:25:30.795 05:20:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:30.795 05:20:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:25:30.795 05:20:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:30.795 05:20:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 01:25:30.795 05:20:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:30.795 05:20:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:25:30.795 [ 01:25:30.795 { 01:25:30.795 "name": "BaseBdev4", 01:25:30.795 "aliases": [ 01:25:30.795 "b7bc468e-5f08-47a7-b086-c181efed1f1e" 01:25:30.795 ], 01:25:30.795 "product_name": "Malloc disk", 01:25:30.795 "block_size": 512, 
01:25:30.795 "num_blocks": 65536, 01:25:30.795 "uuid": "b7bc468e-5f08-47a7-b086-c181efed1f1e", 01:25:30.795 "assigned_rate_limits": { 01:25:30.795 "rw_ios_per_sec": 0, 01:25:30.795 "rw_mbytes_per_sec": 0, 01:25:30.795 "r_mbytes_per_sec": 0, 01:25:30.795 "w_mbytes_per_sec": 0 01:25:30.795 }, 01:25:30.795 "claimed": true, 01:25:30.795 "claim_type": "exclusive_write", 01:25:30.795 "zoned": false, 01:25:30.795 "supported_io_types": { 01:25:30.795 "read": true, 01:25:30.795 "write": true, 01:25:30.795 "unmap": true, 01:25:30.795 "flush": true, 01:25:30.795 "reset": true, 01:25:30.795 "nvme_admin": false, 01:25:30.795 "nvme_io": false, 01:25:30.795 "nvme_io_md": false, 01:25:30.795 "write_zeroes": true, 01:25:30.795 "zcopy": true, 01:25:30.795 "get_zone_info": false, 01:25:30.795 "zone_management": false, 01:25:30.795 "zone_append": false, 01:25:30.795 "compare": false, 01:25:30.795 "compare_and_write": false, 01:25:30.795 "abort": true, 01:25:30.795 "seek_hole": false, 01:25:30.795 "seek_data": false, 01:25:30.795 "copy": true, 01:25:30.795 "nvme_iov_md": false 01:25:30.795 }, 01:25:30.795 "memory_domains": [ 01:25:30.795 { 01:25:30.795 "dma_device_id": "system", 01:25:30.795 "dma_device_type": 1 01:25:30.795 }, 01:25:30.795 { 01:25:30.795 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:25:30.795 "dma_device_type": 2 01:25:30.795 } 01:25:30.795 ], 01:25:30.795 "driver_specific": {} 01:25:30.795 } 01:25:30.795 ] 01:25:30.795 05:20:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:30.795 05:20:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 01:25:30.795 05:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 01:25:30.795 05:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 01:25:30.795 05:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 
64 4 01:25:30.795 05:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:25:30.795 05:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:25:30.795 05:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 01:25:30.795 05:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:25:30.795 05:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 01:25:30.795 05:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:25:30.795 05:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:25:30.795 05:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:25:30.795 05:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 01:25:30.795 05:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:25:30.795 05:20:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:30.795 05:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:25:30.795 05:20:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:25:30.795 05:20:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:30.795 05:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:25:30.795 "name": "Existed_Raid", 01:25:30.795 "uuid": "798bd1fa-947c-4689-987d-f45842470fa8", 01:25:30.795 "strip_size_kb": 64, 01:25:30.795 "state": "online", 01:25:30.795 "raid_level": "concat", 01:25:30.795 "superblock": true, 01:25:30.795 "num_base_bdevs": 
4, 01:25:30.795 "num_base_bdevs_discovered": 4, 01:25:30.795 "num_base_bdevs_operational": 4, 01:25:30.795 "base_bdevs_list": [ 01:25:30.795 { 01:25:30.795 "name": "BaseBdev1", 01:25:30.795 "uuid": "95681261-3dd0-49cf-bc4f-527084255cf8", 01:25:30.795 "is_configured": true, 01:25:30.795 "data_offset": 2048, 01:25:30.795 "data_size": 63488 01:25:30.795 }, 01:25:30.795 { 01:25:30.795 "name": "BaseBdev2", 01:25:30.795 "uuid": "8fc8685b-46ad-4327-86e4-3c095f64911e", 01:25:30.795 "is_configured": true, 01:25:30.795 "data_offset": 2048, 01:25:30.795 "data_size": 63488 01:25:30.795 }, 01:25:30.795 { 01:25:30.795 "name": "BaseBdev3", 01:25:30.795 "uuid": "2ef31736-ff24-4c96-af69-5fc67fcce591", 01:25:30.795 "is_configured": true, 01:25:30.795 "data_offset": 2048, 01:25:30.795 "data_size": 63488 01:25:30.795 }, 01:25:30.795 { 01:25:30.795 "name": "BaseBdev4", 01:25:30.795 "uuid": "b7bc468e-5f08-47a7-b086-c181efed1f1e", 01:25:30.795 "is_configured": true, 01:25:30.795 "data_offset": 2048, 01:25:30.795 "data_size": 63488 01:25:30.795 } 01:25:30.795 ] 01:25:30.795 }' 01:25:30.795 05:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:25:30.795 05:20:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:25:31.359 05:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 01:25:31.359 05:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 01:25:31.359 05:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 01:25:31.359 05:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 01:25:31.359 05:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 01:25:31.359 05:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 01:25:31.359 
05:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 01:25:31.359 05:20:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:31.359 05:20:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:25:31.359 05:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 01:25:31.359 [2024-12-09 05:20:22.804667] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 01:25:31.359 05:20:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:31.359 05:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 01:25:31.359 "name": "Existed_Raid", 01:25:31.359 "aliases": [ 01:25:31.359 "798bd1fa-947c-4689-987d-f45842470fa8" 01:25:31.359 ], 01:25:31.359 "product_name": "Raid Volume", 01:25:31.359 "block_size": 512, 01:25:31.359 "num_blocks": 253952, 01:25:31.359 "uuid": "798bd1fa-947c-4689-987d-f45842470fa8", 01:25:31.359 "assigned_rate_limits": { 01:25:31.359 "rw_ios_per_sec": 0, 01:25:31.359 "rw_mbytes_per_sec": 0, 01:25:31.359 "r_mbytes_per_sec": 0, 01:25:31.359 "w_mbytes_per_sec": 0 01:25:31.359 }, 01:25:31.359 "claimed": false, 01:25:31.359 "zoned": false, 01:25:31.359 "supported_io_types": { 01:25:31.359 "read": true, 01:25:31.359 "write": true, 01:25:31.359 "unmap": true, 01:25:31.359 "flush": true, 01:25:31.359 "reset": true, 01:25:31.359 "nvme_admin": false, 01:25:31.359 "nvme_io": false, 01:25:31.359 "nvme_io_md": false, 01:25:31.359 "write_zeroes": true, 01:25:31.359 "zcopy": false, 01:25:31.359 "get_zone_info": false, 01:25:31.359 "zone_management": false, 01:25:31.359 "zone_append": false, 01:25:31.359 "compare": false, 01:25:31.359 "compare_and_write": false, 01:25:31.359 "abort": false, 01:25:31.359 "seek_hole": false, 01:25:31.359 "seek_data": false, 01:25:31.359 "copy": false, 01:25:31.359 
"nvme_iov_md": false 01:25:31.359 }, 01:25:31.359 "memory_domains": [ 01:25:31.359 { 01:25:31.359 "dma_device_id": "system", 01:25:31.359 "dma_device_type": 1 01:25:31.359 }, 01:25:31.359 { 01:25:31.359 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:25:31.359 "dma_device_type": 2 01:25:31.359 }, 01:25:31.359 { 01:25:31.359 "dma_device_id": "system", 01:25:31.359 "dma_device_type": 1 01:25:31.359 }, 01:25:31.359 { 01:25:31.359 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:25:31.359 "dma_device_type": 2 01:25:31.359 }, 01:25:31.359 { 01:25:31.359 "dma_device_id": "system", 01:25:31.359 "dma_device_type": 1 01:25:31.359 }, 01:25:31.359 { 01:25:31.359 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:25:31.359 "dma_device_type": 2 01:25:31.359 }, 01:25:31.359 { 01:25:31.359 "dma_device_id": "system", 01:25:31.359 "dma_device_type": 1 01:25:31.359 }, 01:25:31.359 { 01:25:31.359 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:25:31.359 "dma_device_type": 2 01:25:31.359 } 01:25:31.360 ], 01:25:31.360 "driver_specific": { 01:25:31.360 "raid": { 01:25:31.360 "uuid": "798bd1fa-947c-4689-987d-f45842470fa8", 01:25:31.360 "strip_size_kb": 64, 01:25:31.360 "state": "online", 01:25:31.360 "raid_level": "concat", 01:25:31.360 "superblock": true, 01:25:31.360 "num_base_bdevs": 4, 01:25:31.360 "num_base_bdevs_discovered": 4, 01:25:31.360 "num_base_bdevs_operational": 4, 01:25:31.360 "base_bdevs_list": [ 01:25:31.360 { 01:25:31.360 "name": "BaseBdev1", 01:25:31.360 "uuid": "95681261-3dd0-49cf-bc4f-527084255cf8", 01:25:31.360 "is_configured": true, 01:25:31.360 "data_offset": 2048, 01:25:31.360 "data_size": 63488 01:25:31.360 }, 01:25:31.360 { 01:25:31.360 "name": "BaseBdev2", 01:25:31.360 "uuid": "8fc8685b-46ad-4327-86e4-3c095f64911e", 01:25:31.360 "is_configured": true, 01:25:31.360 "data_offset": 2048, 01:25:31.360 "data_size": 63488 01:25:31.360 }, 01:25:31.360 { 01:25:31.360 "name": "BaseBdev3", 01:25:31.360 "uuid": "2ef31736-ff24-4c96-af69-5fc67fcce591", 01:25:31.360 "is_configured": true, 
01:25:31.360 "data_offset": 2048, 01:25:31.360 "data_size": 63488 01:25:31.360 }, 01:25:31.360 { 01:25:31.360 "name": "BaseBdev4", 01:25:31.360 "uuid": "b7bc468e-5f08-47a7-b086-c181efed1f1e", 01:25:31.360 "is_configured": true, 01:25:31.360 "data_offset": 2048, 01:25:31.360 "data_size": 63488 01:25:31.360 } 01:25:31.360 ] 01:25:31.360 } 01:25:31.360 } 01:25:31.360 }' 01:25:31.360 05:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 01:25:31.360 05:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 01:25:31.360 BaseBdev2 01:25:31.360 BaseBdev3 01:25:31.360 BaseBdev4' 01:25:31.360 05:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:25:31.360 05:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 01:25:31.360 05:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:25:31.360 05:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 01:25:31.360 05:20:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:31.360 05:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:25:31.360 05:20:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:25:31.617 05:20:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:31.617 05:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:25:31.617 05:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:25:31.617 05:20:23 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:25:31.617 05:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 01:25:31.617 05:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:25:31.617 05:20:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:31.617 05:20:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:25:31.617 05:20:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:31.617 05:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:25:31.617 05:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:25:31.617 05:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:25:31.617 05:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 01:25:31.617 05:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:25:31.617 05:20:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:31.617 05:20:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:25:31.617 05:20:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:31.617 05:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:25:31.617 05:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:25:31.617 05:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 01:25:31.617 05:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 01:25:31.617 05:20:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:31.617 05:20:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:25:31.617 05:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:25:31.617 05:20:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:31.617 05:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:25:31.617 05:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:25:31.617 05:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 01:25:31.617 05:20:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:31.617 05:20:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:25:31.617 [2024-12-09 05:20:23.168428] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 01:25:31.617 [2024-12-09 05:20:23.168600] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 01:25:31.617 [2024-12-09 05:20:23.168803] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 01:25:31.875 05:20:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:31.875 05:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 01:25:31.875 05:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 01:25:31.875 05:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 01:25:31.875 05:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 01:25:31.875 05:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 01:25:31.875 05:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 01:25:31.875 05:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:25:31.875 05:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 01:25:31.875 05:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 01:25:31.875 05:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:25:31.875 05:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 01:25:31.875 05:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:25:31.875 05:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:25:31.875 05:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:25:31.875 05:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 01:25:31.875 05:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:25:31.875 05:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:25:31.875 05:20:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:31.875 05:20:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:25:31.875 05:20:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
01:25:31.875 05:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:25:31.875 "name": "Existed_Raid", 01:25:31.875 "uuid": "798bd1fa-947c-4689-987d-f45842470fa8", 01:25:31.875 "strip_size_kb": 64, 01:25:31.875 "state": "offline", 01:25:31.875 "raid_level": "concat", 01:25:31.875 "superblock": true, 01:25:31.875 "num_base_bdevs": 4, 01:25:31.875 "num_base_bdevs_discovered": 3, 01:25:31.875 "num_base_bdevs_operational": 3, 01:25:31.875 "base_bdevs_list": [ 01:25:31.875 { 01:25:31.875 "name": null, 01:25:31.875 "uuid": "00000000-0000-0000-0000-000000000000", 01:25:31.875 "is_configured": false, 01:25:31.875 "data_offset": 0, 01:25:31.875 "data_size": 63488 01:25:31.875 }, 01:25:31.875 { 01:25:31.875 "name": "BaseBdev2", 01:25:31.875 "uuid": "8fc8685b-46ad-4327-86e4-3c095f64911e", 01:25:31.875 "is_configured": true, 01:25:31.875 "data_offset": 2048, 01:25:31.875 "data_size": 63488 01:25:31.875 }, 01:25:31.875 { 01:25:31.875 "name": "BaseBdev3", 01:25:31.875 "uuid": "2ef31736-ff24-4c96-af69-5fc67fcce591", 01:25:31.875 "is_configured": true, 01:25:31.875 "data_offset": 2048, 01:25:31.875 "data_size": 63488 01:25:31.875 }, 01:25:31.875 { 01:25:31.875 "name": "BaseBdev4", 01:25:31.875 "uuid": "b7bc468e-5f08-47a7-b086-c181efed1f1e", 01:25:31.875 "is_configured": true, 01:25:31.875 "data_offset": 2048, 01:25:31.875 "data_size": 63488 01:25:31.875 } 01:25:31.875 ] 01:25:31.875 }' 01:25:31.875 05:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:25:31.875 05:20:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:25:32.441 05:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 01:25:32.441 05:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 01:25:32.441 05:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 01:25:32.441 
05:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 01:25:32.441 05:20:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:32.441 05:20:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:25:32.441 05:20:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:32.441 05:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 01:25:32.441 05:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 01:25:32.441 05:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 01:25:32.441 05:20:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:32.441 05:20:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:25:32.441 [2024-12-09 05:20:23.849213] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 01:25:32.441 05:20:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:32.441 05:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 01:25:32.441 05:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 01:25:32.441 05:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 01:25:32.441 05:20:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:32.441 05:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 01:25:32.441 05:20:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:25:32.441 05:20:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 01:25:32.441 05:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 01:25:32.441 05:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 01:25:32.441 05:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 01:25:32.441 05:20:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:32.441 05:20:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:25:32.441 [2024-12-09 05:20:23.995653] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 01:25:32.699 05:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:32.699 05:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 01:25:32.699 05:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 01:25:32.699 05:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 01:25:32.699 05:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:32.699 05:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:25:32.699 05:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 01:25:32.699 05:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:32.699 05:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 01:25:32.699 05:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 01:25:32.699 05:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 01:25:32.699 05:20:24 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:32.699 05:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:25:32.699 [2024-12-09 05:20:24.138917] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 01:25:32.699 [2024-12-09 05:20:24.139141] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 01:25:32.699 05:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:32.699 05:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 01:25:32.699 05:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 01:25:32.699 05:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 01:25:32.699 05:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 01:25:32.699 05:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:32.699 05:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:25:32.699 05:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:32.699 05:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 01:25:32.699 05:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 01:25:32.699 05:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 01:25:32.699 05:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 01:25:32.699 05:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 01:25:32.699 05:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 01:25:32.699 05:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:32.699 05:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:25:32.960 BaseBdev2 01:25:32.960 05:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:32.960 05:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 01:25:32.960 05:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 01:25:32.960 05:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 01:25:32.960 05:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 01:25:32.960 05:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 01:25:32.960 05:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 01:25:32.960 05:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 01:25:32.960 05:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:32.960 05:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:25:32.960 05:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:32.960 05:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 01:25:32.960 05:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:32.960 05:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:25:32.960 [ 01:25:32.960 { 01:25:32.960 "name": "BaseBdev2", 01:25:32.960 "aliases": [ 01:25:32.960 
"61e8ef32-9dcf-4ca2-8538-4c02965d5018" 01:25:32.960 ], 01:25:32.960 "product_name": "Malloc disk", 01:25:32.960 "block_size": 512, 01:25:32.960 "num_blocks": 65536, 01:25:32.960 "uuid": "61e8ef32-9dcf-4ca2-8538-4c02965d5018", 01:25:32.960 "assigned_rate_limits": { 01:25:32.960 "rw_ios_per_sec": 0, 01:25:32.960 "rw_mbytes_per_sec": 0, 01:25:32.960 "r_mbytes_per_sec": 0, 01:25:32.960 "w_mbytes_per_sec": 0 01:25:32.960 }, 01:25:32.960 "claimed": false, 01:25:32.960 "zoned": false, 01:25:32.960 "supported_io_types": { 01:25:32.960 "read": true, 01:25:32.960 "write": true, 01:25:32.960 "unmap": true, 01:25:32.960 "flush": true, 01:25:32.960 "reset": true, 01:25:32.960 "nvme_admin": false, 01:25:32.960 "nvme_io": false, 01:25:32.960 "nvme_io_md": false, 01:25:32.960 "write_zeroes": true, 01:25:32.960 "zcopy": true, 01:25:32.960 "get_zone_info": false, 01:25:32.960 "zone_management": false, 01:25:32.960 "zone_append": false, 01:25:32.960 "compare": false, 01:25:32.960 "compare_and_write": false, 01:25:32.960 "abort": true, 01:25:32.960 "seek_hole": false, 01:25:32.960 "seek_data": false, 01:25:32.960 "copy": true, 01:25:32.960 "nvme_iov_md": false 01:25:32.960 }, 01:25:32.960 "memory_domains": [ 01:25:32.960 { 01:25:32.960 "dma_device_id": "system", 01:25:32.960 "dma_device_type": 1 01:25:32.960 }, 01:25:32.960 { 01:25:32.960 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:25:32.960 "dma_device_type": 2 01:25:32.960 } 01:25:32.960 ], 01:25:32.960 "driver_specific": {} 01:25:32.960 } 01:25:32.960 ] 01:25:32.960 05:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:32.960 05:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 01:25:32.960 05:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 01:25:32.960 05:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 01:25:32.960 05:20:24 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 01:25:32.960 05:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:32.960 05:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:25:32.960 BaseBdev3 01:25:32.960 05:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:32.960 05:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 01:25:32.960 05:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 01:25:32.960 05:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 01:25:32.960 05:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 01:25:32.960 05:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 01:25:32.960 05:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 01:25:32.960 05:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 01:25:32.960 05:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:32.960 05:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:25:32.960 05:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:32.960 05:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 01:25:32.960 05:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:32.960 05:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:25:32.960 [ 01:25:32.960 { 
01:25:32.960 "name": "BaseBdev3", 01:25:32.960 "aliases": [ 01:25:32.960 "dd2e1f9f-2f6c-4827-ab68-668185768d20" 01:25:32.960 ], 01:25:32.960 "product_name": "Malloc disk", 01:25:32.960 "block_size": 512, 01:25:32.960 "num_blocks": 65536, 01:25:32.960 "uuid": "dd2e1f9f-2f6c-4827-ab68-668185768d20", 01:25:32.960 "assigned_rate_limits": { 01:25:32.960 "rw_ios_per_sec": 0, 01:25:32.960 "rw_mbytes_per_sec": 0, 01:25:32.960 "r_mbytes_per_sec": 0, 01:25:32.960 "w_mbytes_per_sec": 0 01:25:32.960 }, 01:25:32.960 "claimed": false, 01:25:32.960 "zoned": false, 01:25:32.960 "supported_io_types": { 01:25:32.960 "read": true, 01:25:32.960 "write": true, 01:25:32.960 "unmap": true, 01:25:32.960 "flush": true, 01:25:32.960 "reset": true, 01:25:32.960 "nvme_admin": false, 01:25:32.960 "nvme_io": false, 01:25:32.960 "nvme_io_md": false, 01:25:32.960 "write_zeroes": true, 01:25:32.960 "zcopy": true, 01:25:32.960 "get_zone_info": false, 01:25:32.960 "zone_management": false, 01:25:32.960 "zone_append": false, 01:25:32.960 "compare": false, 01:25:32.960 "compare_and_write": false, 01:25:32.960 "abort": true, 01:25:32.960 "seek_hole": false, 01:25:32.960 "seek_data": false, 01:25:32.960 "copy": true, 01:25:32.960 "nvme_iov_md": false 01:25:32.960 }, 01:25:32.960 "memory_domains": [ 01:25:32.960 { 01:25:32.960 "dma_device_id": "system", 01:25:32.960 "dma_device_type": 1 01:25:32.960 }, 01:25:32.960 { 01:25:32.960 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:25:32.960 "dma_device_type": 2 01:25:32.960 } 01:25:32.960 ], 01:25:32.960 "driver_specific": {} 01:25:32.960 } 01:25:32.960 ] 01:25:32.960 05:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:32.960 05:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 01:25:32.960 05:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 01:25:32.960 05:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 01:25:32.960 05:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 01:25:32.960 05:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:32.960 05:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:25:32.960 BaseBdev4 01:25:32.960 05:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:32.960 05:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 01:25:32.960 05:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 01:25:32.960 05:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 01:25:32.960 05:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 01:25:32.960 05:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 01:25:32.960 05:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 01:25:32.960 05:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 01:25:32.960 05:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:32.960 05:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:25:32.960 05:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:32.960 05:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 01:25:32.960 05:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:32.960 05:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 01:25:32.960 [ 01:25:32.960 { 01:25:32.960 "name": "BaseBdev4", 01:25:32.960 "aliases": [ 01:25:32.960 "47bc76fa-9e06-427b-82cb-91f2de70eaf9" 01:25:32.960 ], 01:25:32.960 "product_name": "Malloc disk", 01:25:32.960 "block_size": 512, 01:25:32.960 "num_blocks": 65536, 01:25:32.960 "uuid": "47bc76fa-9e06-427b-82cb-91f2de70eaf9", 01:25:32.960 "assigned_rate_limits": { 01:25:32.960 "rw_ios_per_sec": 0, 01:25:32.960 "rw_mbytes_per_sec": 0, 01:25:32.960 "r_mbytes_per_sec": 0, 01:25:32.960 "w_mbytes_per_sec": 0 01:25:32.960 }, 01:25:32.960 "claimed": false, 01:25:32.960 "zoned": false, 01:25:32.960 "supported_io_types": { 01:25:32.960 "read": true, 01:25:32.960 "write": true, 01:25:32.960 "unmap": true, 01:25:32.960 "flush": true, 01:25:32.960 "reset": true, 01:25:32.960 "nvme_admin": false, 01:25:32.960 "nvme_io": false, 01:25:32.960 "nvme_io_md": false, 01:25:32.960 "write_zeroes": true, 01:25:32.960 "zcopy": true, 01:25:32.960 "get_zone_info": false, 01:25:32.960 "zone_management": false, 01:25:32.960 "zone_append": false, 01:25:32.960 "compare": false, 01:25:32.960 "compare_and_write": false, 01:25:32.960 "abort": true, 01:25:32.960 "seek_hole": false, 01:25:32.960 "seek_data": false, 01:25:32.960 "copy": true, 01:25:32.960 "nvme_iov_md": false 01:25:32.960 }, 01:25:32.960 "memory_domains": [ 01:25:32.960 { 01:25:32.960 "dma_device_id": "system", 01:25:32.960 "dma_device_type": 1 01:25:32.960 }, 01:25:32.960 { 01:25:32.960 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:25:32.960 "dma_device_type": 2 01:25:32.960 } 01:25:32.960 ], 01:25:32.960 "driver_specific": {} 01:25:32.960 } 01:25:32.960 ] 01:25:32.960 05:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:32.960 05:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 01:25:32.960 05:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 01:25:32.960 05:20:24 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 01:25:32.960 05:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 01:25:32.960 05:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:32.960 05:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:25:32.960 [2024-12-09 05:20:24.510343] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 01:25:32.960 [2024-12-09 05:20:24.510584] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 01:25:32.960 [2024-12-09 05:20:24.510735] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 01:25:32.960 [2024-12-09 05:20:24.513308] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 01:25:32.960 [2024-12-09 05:20:24.513417] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 01:25:32.960 05:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:32.960 05:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 01:25:32.960 05:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:25:32.960 05:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:25:32.960 05:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 01:25:32.960 05:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:25:32.960 05:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 01:25:32.960 05:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:25:32.960 05:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:25:32.960 05:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:25:32.960 05:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 01:25:32.960 05:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:25:32.960 05:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:25:32.960 05:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:32.960 05:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:25:32.960 05:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:32.961 05:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:25:32.961 "name": "Existed_Raid", 01:25:32.961 "uuid": "3109fa97-5b40-4c10-bb1d-a178fcd0cf98", 01:25:32.961 "strip_size_kb": 64, 01:25:32.961 "state": "configuring", 01:25:32.961 "raid_level": "concat", 01:25:32.961 "superblock": true, 01:25:32.961 "num_base_bdevs": 4, 01:25:32.961 "num_base_bdevs_discovered": 3, 01:25:32.961 "num_base_bdevs_operational": 4, 01:25:32.961 "base_bdevs_list": [ 01:25:32.961 { 01:25:32.961 "name": "BaseBdev1", 01:25:32.961 "uuid": "00000000-0000-0000-0000-000000000000", 01:25:32.961 "is_configured": false, 01:25:32.961 "data_offset": 0, 01:25:32.961 "data_size": 0 01:25:32.961 }, 01:25:32.961 { 01:25:32.961 "name": "BaseBdev2", 01:25:32.961 "uuid": "61e8ef32-9dcf-4ca2-8538-4c02965d5018", 01:25:32.961 "is_configured": true, 01:25:32.961 "data_offset": 2048, 01:25:32.961 "data_size": 63488 
01:25:32.961 }, 01:25:32.961 { 01:25:32.961 "name": "BaseBdev3", 01:25:32.961 "uuid": "dd2e1f9f-2f6c-4827-ab68-668185768d20", 01:25:32.961 "is_configured": true, 01:25:32.961 "data_offset": 2048, 01:25:32.961 "data_size": 63488 01:25:32.961 }, 01:25:32.961 { 01:25:32.961 "name": "BaseBdev4", 01:25:32.961 "uuid": "47bc76fa-9e06-427b-82cb-91f2de70eaf9", 01:25:32.961 "is_configured": true, 01:25:32.961 "data_offset": 2048, 01:25:32.961 "data_size": 63488 01:25:32.961 } 01:25:32.961 ] 01:25:32.961 }' 01:25:32.961 05:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:25:32.961 05:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:25:33.526 05:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 01:25:33.526 05:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:33.526 05:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:25:33.526 [2024-12-09 05:20:25.034529] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 01:25:33.526 05:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:33.526 05:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 01:25:33.526 05:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:25:33.526 05:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:25:33.526 05:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 01:25:33.526 05:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:25:33.526 05:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 01:25:33.526 05:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:25:33.526 05:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:25:33.526 05:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:25:33.526 05:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 01:25:33.526 05:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:25:33.526 05:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:33.526 05:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:25:33.526 05:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:25:33.526 05:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:33.526 05:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:25:33.526 "name": "Existed_Raid", 01:25:33.526 "uuid": "3109fa97-5b40-4c10-bb1d-a178fcd0cf98", 01:25:33.526 "strip_size_kb": 64, 01:25:33.526 "state": "configuring", 01:25:33.526 "raid_level": "concat", 01:25:33.526 "superblock": true, 01:25:33.526 "num_base_bdevs": 4, 01:25:33.526 "num_base_bdevs_discovered": 2, 01:25:33.526 "num_base_bdevs_operational": 4, 01:25:33.526 "base_bdevs_list": [ 01:25:33.526 { 01:25:33.526 "name": "BaseBdev1", 01:25:33.526 "uuid": "00000000-0000-0000-0000-000000000000", 01:25:33.526 "is_configured": false, 01:25:33.526 "data_offset": 0, 01:25:33.526 "data_size": 0 01:25:33.526 }, 01:25:33.526 { 01:25:33.526 "name": null, 01:25:33.526 "uuid": "61e8ef32-9dcf-4ca2-8538-4c02965d5018", 01:25:33.526 "is_configured": false, 01:25:33.526 "data_offset": 0, 01:25:33.526 "data_size": 63488 
01:25:33.526 }, 01:25:33.526 { 01:25:33.526 "name": "BaseBdev3", 01:25:33.526 "uuid": "dd2e1f9f-2f6c-4827-ab68-668185768d20", 01:25:33.526 "is_configured": true, 01:25:33.526 "data_offset": 2048, 01:25:33.526 "data_size": 63488 01:25:33.526 }, 01:25:33.526 { 01:25:33.526 "name": "BaseBdev4", 01:25:33.526 "uuid": "47bc76fa-9e06-427b-82cb-91f2de70eaf9", 01:25:33.526 "is_configured": true, 01:25:33.526 "data_offset": 2048, 01:25:33.526 "data_size": 63488 01:25:33.526 } 01:25:33.526 ] 01:25:33.526 }' 01:25:33.526 05:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:25:33.526 05:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:25:34.091 05:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 01:25:34.091 05:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 01:25:34.091 05:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:34.091 05:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:25:34.091 05:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:34.091 05:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 01:25:34.091 05:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 01:25:34.091 05:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:34.091 05:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:25:34.091 [2024-12-09 05:20:25.640785] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 01:25:34.091 BaseBdev1 01:25:34.091 05:20:25 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:34.091 05:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 01:25:34.091 05:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 01:25:34.091 05:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 01:25:34.091 05:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 01:25:34.091 05:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 01:25:34.091 05:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 01:25:34.091 05:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 01:25:34.091 05:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:34.091 05:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:25:34.091 05:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:34.091 05:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 01:25:34.091 05:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:34.091 05:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:25:34.091 [ 01:25:34.091 { 01:25:34.091 "name": "BaseBdev1", 01:25:34.091 "aliases": [ 01:25:34.091 "be044b05-2aec-41d8-b03d-827ffe46c117" 01:25:34.091 ], 01:25:34.091 "product_name": "Malloc disk", 01:25:34.091 "block_size": 512, 01:25:34.091 "num_blocks": 65536, 01:25:34.091 "uuid": "be044b05-2aec-41d8-b03d-827ffe46c117", 01:25:34.091 "assigned_rate_limits": { 01:25:34.091 "rw_ios_per_sec": 0, 01:25:34.091 "rw_mbytes_per_sec": 0, 
01:25:34.091 "r_mbytes_per_sec": 0, 01:25:34.091 "w_mbytes_per_sec": 0 01:25:34.091 }, 01:25:34.091 "claimed": true, 01:25:34.091 "claim_type": "exclusive_write", 01:25:34.091 "zoned": false, 01:25:34.091 "supported_io_types": { 01:25:34.091 "read": true, 01:25:34.091 "write": true, 01:25:34.091 "unmap": true, 01:25:34.091 "flush": true, 01:25:34.091 "reset": true, 01:25:34.091 "nvme_admin": false, 01:25:34.091 "nvme_io": false, 01:25:34.091 "nvme_io_md": false, 01:25:34.091 "write_zeroes": true, 01:25:34.091 "zcopy": true, 01:25:34.091 "get_zone_info": false, 01:25:34.091 "zone_management": false, 01:25:34.091 "zone_append": false, 01:25:34.091 "compare": false, 01:25:34.091 "compare_and_write": false, 01:25:34.091 "abort": true, 01:25:34.091 "seek_hole": false, 01:25:34.091 "seek_data": false, 01:25:34.091 "copy": true, 01:25:34.091 "nvme_iov_md": false 01:25:34.091 }, 01:25:34.091 "memory_domains": [ 01:25:34.091 { 01:25:34.091 "dma_device_id": "system", 01:25:34.091 "dma_device_type": 1 01:25:34.091 }, 01:25:34.091 { 01:25:34.091 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:25:34.091 "dma_device_type": 2 01:25:34.091 } 01:25:34.091 ], 01:25:34.091 "driver_specific": {} 01:25:34.091 } 01:25:34.091 ] 01:25:34.091 05:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:34.091 05:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 01:25:34.091 05:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 01:25:34.091 05:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:25:34.091 05:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:25:34.091 05:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 01:25:34.091 05:20:25 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:25:34.091 05:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 01:25:34.091 05:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:25:34.091 05:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:25:34.091 05:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:25:34.091 05:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 01:25:34.091 05:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:25:34.091 05:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:25:34.091 05:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:34.091 05:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:25:34.091 05:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:34.350 05:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:25:34.350 "name": "Existed_Raid", 01:25:34.350 "uuid": "3109fa97-5b40-4c10-bb1d-a178fcd0cf98", 01:25:34.350 "strip_size_kb": 64, 01:25:34.350 "state": "configuring", 01:25:34.350 "raid_level": "concat", 01:25:34.350 "superblock": true, 01:25:34.350 "num_base_bdevs": 4, 01:25:34.350 "num_base_bdevs_discovered": 3, 01:25:34.350 "num_base_bdevs_operational": 4, 01:25:34.350 "base_bdevs_list": [ 01:25:34.350 { 01:25:34.350 "name": "BaseBdev1", 01:25:34.350 "uuid": "be044b05-2aec-41d8-b03d-827ffe46c117", 01:25:34.350 "is_configured": true, 01:25:34.350 "data_offset": 2048, 01:25:34.350 "data_size": 63488 01:25:34.350 }, 01:25:34.350 { 
01:25:34.350 "name": null, 01:25:34.350 "uuid": "61e8ef32-9dcf-4ca2-8538-4c02965d5018", 01:25:34.350 "is_configured": false, 01:25:34.350 "data_offset": 0, 01:25:34.350 "data_size": 63488 01:25:34.350 }, 01:25:34.350 { 01:25:34.350 "name": "BaseBdev3", 01:25:34.350 "uuid": "dd2e1f9f-2f6c-4827-ab68-668185768d20", 01:25:34.350 "is_configured": true, 01:25:34.350 "data_offset": 2048, 01:25:34.350 "data_size": 63488 01:25:34.350 }, 01:25:34.350 { 01:25:34.350 "name": "BaseBdev4", 01:25:34.350 "uuid": "47bc76fa-9e06-427b-82cb-91f2de70eaf9", 01:25:34.350 "is_configured": true, 01:25:34.350 "data_offset": 2048, 01:25:34.350 "data_size": 63488 01:25:34.350 } 01:25:34.350 ] 01:25:34.350 }' 01:25:34.350 05:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:25:34.350 05:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:25:34.607 05:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 01:25:34.607 05:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 01:25:34.607 05:20:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:34.607 05:20:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:25:34.607 05:20:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:34.864 05:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 01:25:34.864 05:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 01:25:34.864 05:20:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:34.864 05:20:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:25:34.864 [2024-12-09 05:20:26.257092] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 01:25:34.864 05:20:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:34.864 05:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 01:25:34.864 05:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:25:34.864 05:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:25:34.864 05:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 01:25:34.864 05:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:25:34.864 05:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 01:25:34.864 05:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:25:34.864 05:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:25:34.864 05:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:25:34.864 05:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 01:25:34.864 05:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:25:34.864 05:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:25:34.864 05:20:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:34.864 05:20:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:25:34.864 05:20:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:34.864 05:20:26 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:25:34.864 "name": "Existed_Raid", 01:25:34.864 "uuid": "3109fa97-5b40-4c10-bb1d-a178fcd0cf98", 01:25:34.864 "strip_size_kb": 64, 01:25:34.864 "state": "configuring", 01:25:34.864 "raid_level": "concat", 01:25:34.864 "superblock": true, 01:25:34.864 "num_base_bdevs": 4, 01:25:34.864 "num_base_bdevs_discovered": 2, 01:25:34.864 "num_base_bdevs_operational": 4, 01:25:34.864 "base_bdevs_list": [ 01:25:34.864 { 01:25:34.864 "name": "BaseBdev1", 01:25:34.864 "uuid": "be044b05-2aec-41d8-b03d-827ffe46c117", 01:25:34.864 "is_configured": true, 01:25:34.864 "data_offset": 2048, 01:25:34.864 "data_size": 63488 01:25:34.864 }, 01:25:34.864 { 01:25:34.864 "name": null, 01:25:34.864 "uuid": "61e8ef32-9dcf-4ca2-8538-4c02965d5018", 01:25:34.864 "is_configured": false, 01:25:34.864 "data_offset": 0, 01:25:34.864 "data_size": 63488 01:25:34.864 }, 01:25:34.864 { 01:25:34.864 "name": null, 01:25:34.864 "uuid": "dd2e1f9f-2f6c-4827-ab68-668185768d20", 01:25:34.864 "is_configured": false, 01:25:34.864 "data_offset": 0, 01:25:34.864 "data_size": 63488 01:25:34.864 }, 01:25:34.864 { 01:25:34.864 "name": "BaseBdev4", 01:25:34.864 "uuid": "47bc76fa-9e06-427b-82cb-91f2de70eaf9", 01:25:34.864 "is_configured": true, 01:25:34.864 "data_offset": 2048, 01:25:34.864 "data_size": 63488 01:25:34.864 } 01:25:34.864 ] 01:25:34.864 }' 01:25:34.864 05:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:25:34.864 05:20:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:25:35.430 05:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 01:25:35.430 05:20:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:35.430 05:20:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:25:35.430 05:20:26 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 01:25:35.430 05:20:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:35.430 05:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 01:25:35.430 05:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 01:25:35.430 05:20:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:35.430 05:20:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:25:35.430 [2024-12-09 05:20:26.853229] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 01:25:35.430 05:20:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:35.431 05:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 01:25:35.431 05:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:25:35.431 05:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:25:35.431 05:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 01:25:35.431 05:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:25:35.431 05:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 01:25:35.431 05:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:25:35.431 05:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:25:35.431 05:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 01:25:35.431 05:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 01:25:35.431 05:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:25:35.431 05:20:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:35.431 05:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:25:35.431 05:20:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:25:35.431 05:20:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:35.431 05:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:25:35.431 "name": "Existed_Raid", 01:25:35.431 "uuid": "3109fa97-5b40-4c10-bb1d-a178fcd0cf98", 01:25:35.431 "strip_size_kb": 64, 01:25:35.431 "state": "configuring", 01:25:35.431 "raid_level": "concat", 01:25:35.431 "superblock": true, 01:25:35.431 "num_base_bdevs": 4, 01:25:35.431 "num_base_bdevs_discovered": 3, 01:25:35.431 "num_base_bdevs_operational": 4, 01:25:35.431 "base_bdevs_list": [ 01:25:35.431 { 01:25:35.431 "name": "BaseBdev1", 01:25:35.431 "uuid": "be044b05-2aec-41d8-b03d-827ffe46c117", 01:25:35.431 "is_configured": true, 01:25:35.431 "data_offset": 2048, 01:25:35.431 "data_size": 63488 01:25:35.431 }, 01:25:35.431 { 01:25:35.431 "name": null, 01:25:35.431 "uuid": "61e8ef32-9dcf-4ca2-8538-4c02965d5018", 01:25:35.431 "is_configured": false, 01:25:35.431 "data_offset": 0, 01:25:35.431 "data_size": 63488 01:25:35.431 }, 01:25:35.431 { 01:25:35.431 "name": "BaseBdev3", 01:25:35.431 "uuid": "dd2e1f9f-2f6c-4827-ab68-668185768d20", 01:25:35.431 "is_configured": true, 01:25:35.431 "data_offset": 2048, 01:25:35.431 "data_size": 63488 01:25:35.431 }, 01:25:35.431 { 01:25:35.431 "name": "BaseBdev4", 01:25:35.431 "uuid": 
"47bc76fa-9e06-427b-82cb-91f2de70eaf9", 01:25:35.431 "is_configured": true, 01:25:35.431 "data_offset": 2048, 01:25:35.431 "data_size": 63488 01:25:35.431 } 01:25:35.431 ] 01:25:35.431 }' 01:25:35.431 05:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:25:35.431 05:20:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:25:36.022 05:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 01:25:36.022 05:20:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:36.022 05:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 01:25:36.022 05:20:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:25:36.022 05:20:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:36.022 05:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 01:25:36.022 05:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 01:25:36.022 05:20:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:36.022 05:20:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:25:36.022 [2024-12-09 05:20:27.445454] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 01:25:36.022 05:20:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:36.022 05:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 01:25:36.022 05:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:25:36.022 05:20:27 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:25:36.022 05:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 01:25:36.022 05:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:25:36.022 05:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 01:25:36.022 05:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:25:36.022 05:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:25:36.022 05:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:25:36.022 05:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 01:25:36.022 05:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:25:36.022 05:20:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:36.022 05:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:25:36.022 05:20:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:25:36.022 05:20:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:36.022 05:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:25:36.022 "name": "Existed_Raid", 01:25:36.022 "uuid": "3109fa97-5b40-4c10-bb1d-a178fcd0cf98", 01:25:36.022 "strip_size_kb": 64, 01:25:36.022 "state": "configuring", 01:25:36.022 "raid_level": "concat", 01:25:36.022 "superblock": true, 01:25:36.022 "num_base_bdevs": 4, 01:25:36.022 "num_base_bdevs_discovered": 2, 01:25:36.022 "num_base_bdevs_operational": 4, 01:25:36.022 "base_bdevs_list": [ 01:25:36.022 { 01:25:36.022 "name": null, 01:25:36.022 
"uuid": "be044b05-2aec-41d8-b03d-827ffe46c117", 01:25:36.022 "is_configured": false, 01:25:36.022 "data_offset": 0, 01:25:36.022 "data_size": 63488 01:25:36.022 }, 01:25:36.022 { 01:25:36.022 "name": null, 01:25:36.022 "uuid": "61e8ef32-9dcf-4ca2-8538-4c02965d5018", 01:25:36.022 "is_configured": false, 01:25:36.022 "data_offset": 0, 01:25:36.022 "data_size": 63488 01:25:36.022 }, 01:25:36.022 { 01:25:36.022 "name": "BaseBdev3", 01:25:36.022 "uuid": "dd2e1f9f-2f6c-4827-ab68-668185768d20", 01:25:36.022 "is_configured": true, 01:25:36.022 "data_offset": 2048, 01:25:36.022 "data_size": 63488 01:25:36.022 }, 01:25:36.022 { 01:25:36.022 "name": "BaseBdev4", 01:25:36.022 "uuid": "47bc76fa-9e06-427b-82cb-91f2de70eaf9", 01:25:36.022 "is_configured": true, 01:25:36.022 "data_offset": 2048, 01:25:36.022 "data_size": 63488 01:25:36.022 } 01:25:36.022 ] 01:25:36.022 }' 01:25:36.022 05:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:25:36.022 05:20:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:25:36.589 05:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 01:25:36.589 05:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 01:25:36.589 05:20:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:36.589 05:20:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:25:36.589 05:20:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:36.589 05:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 01:25:36.589 05:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 01:25:36.589 05:20:28 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 01:25:36.589 05:20:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:25:36.589 [2024-12-09 05:20:28.104954] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 01:25:36.589 05:20:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:36.589 05:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 01:25:36.589 05:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:25:36.589 05:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:25:36.589 05:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 01:25:36.589 05:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:25:36.589 05:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 01:25:36.589 05:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:25:36.589 05:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:25:36.589 05:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:25:36.589 05:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 01:25:36.589 05:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:25:36.589 05:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:25:36.589 05:20:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:36.589 05:20:28 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:25:36.589 05:20:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:36.589 05:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:25:36.589 "name": "Existed_Raid", 01:25:36.589 "uuid": "3109fa97-5b40-4c10-bb1d-a178fcd0cf98", 01:25:36.589 "strip_size_kb": 64, 01:25:36.589 "state": "configuring", 01:25:36.589 "raid_level": "concat", 01:25:36.589 "superblock": true, 01:25:36.589 "num_base_bdevs": 4, 01:25:36.589 "num_base_bdevs_discovered": 3, 01:25:36.589 "num_base_bdevs_operational": 4, 01:25:36.589 "base_bdevs_list": [ 01:25:36.589 { 01:25:36.589 "name": null, 01:25:36.589 "uuid": "be044b05-2aec-41d8-b03d-827ffe46c117", 01:25:36.589 "is_configured": false, 01:25:36.589 "data_offset": 0, 01:25:36.589 "data_size": 63488 01:25:36.589 }, 01:25:36.589 { 01:25:36.589 "name": "BaseBdev2", 01:25:36.589 "uuid": "61e8ef32-9dcf-4ca2-8538-4c02965d5018", 01:25:36.589 "is_configured": true, 01:25:36.589 "data_offset": 2048, 01:25:36.589 "data_size": 63488 01:25:36.589 }, 01:25:36.589 { 01:25:36.589 "name": "BaseBdev3", 01:25:36.589 "uuid": "dd2e1f9f-2f6c-4827-ab68-668185768d20", 01:25:36.589 "is_configured": true, 01:25:36.589 "data_offset": 2048, 01:25:36.589 "data_size": 63488 01:25:36.589 }, 01:25:36.589 { 01:25:36.589 "name": "BaseBdev4", 01:25:36.589 "uuid": "47bc76fa-9e06-427b-82cb-91f2de70eaf9", 01:25:36.589 "is_configured": true, 01:25:36.589 "data_offset": 2048, 01:25:36.589 "data_size": 63488 01:25:36.589 } 01:25:36.589 ] 01:25:36.589 }' 01:25:36.589 05:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:25:36.589 05:20:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:25:37.155 05:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 01:25:37.155 05:20:28 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:37.155 05:20:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:25:37.155 05:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 01:25:37.155 05:20:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:37.155 05:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 01:25:37.155 05:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 01:25:37.155 05:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 01:25:37.155 05:20:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:37.155 05:20:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:25:37.155 05:20:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:37.155 05:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u be044b05-2aec-41d8-b03d-827ffe46c117 01:25:37.155 05:20:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:37.155 05:20:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:25:37.413 [2024-12-09 05:20:28.793462] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 01:25:37.413 [2024-12-09 05:20:28.794085] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 01:25:37.413 [2024-12-09 05:20:28.794110] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 01:25:37.413 [2024-12-09 05:20:28.794475] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 01:25:37.413 NewBaseBdev 01:25:37.413 [2024-12-09 05:20:28.794668] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 01:25:37.413 [2024-12-09 05:20:28.794690] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 01:25:37.413 [2024-12-09 05:20:28.794877] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:25:37.413 05:20:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:37.413 05:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 01:25:37.413 05:20:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 01:25:37.413 05:20:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 01:25:37.413 05:20:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 01:25:37.413 05:20:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 01:25:37.413 05:20:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 01:25:37.413 05:20:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 01:25:37.413 05:20:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:37.413 05:20:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:25:37.413 05:20:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:37.413 05:20:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 01:25:37.413 05:20:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:37.413 05:20:28 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:25:37.413 [ 01:25:37.413 { 01:25:37.413 "name": "NewBaseBdev", 01:25:37.413 "aliases": [ 01:25:37.413 "be044b05-2aec-41d8-b03d-827ffe46c117" 01:25:37.413 ], 01:25:37.413 "product_name": "Malloc disk", 01:25:37.413 "block_size": 512, 01:25:37.413 "num_blocks": 65536, 01:25:37.413 "uuid": "be044b05-2aec-41d8-b03d-827ffe46c117", 01:25:37.413 "assigned_rate_limits": { 01:25:37.413 "rw_ios_per_sec": 0, 01:25:37.413 "rw_mbytes_per_sec": 0, 01:25:37.413 "r_mbytes_per_sec": 0, 01:25:37.413 "w_mbytes_per_sec": 0 01:25:37.413 }, 01:25:37.413 "claimed": true, 01:25:37.413 "claim_type": "exclusive_write", 01:25:37.414 "zoned": false, 01:25:37.414 "supported_io_types": { 01:25:37.414 "read": true, 01:25:37.414 "write": true, 01:25:37.414 "unmap": true, 01:25:37.414 "flush": true, 01:25:37.414 "reset": true, 01:25:37.414 "nvme_admin": false, 01:25:37.414 "nvme_io": false, 01:25:37.414 "nvme_io_md": false, 01:25:37.414 "write_zeroes": true, 01:25:37.414 "zcopy": true, 01:25:37.414 "get_zone_info": false, 01:25:37.414 "zone_management": false, 01:25:37.414 "zone_append": false, 01:25:37.414 "compare": false, 01:25:37.414 "compare_and_write": false, 01:25:37.414 "abort": true, 01:25:37.414 "seek_hole": false, 01:25:37.414 "seek_data": false, 01:25:37.414 "copy": true, 01:25:37.414 "nvme_iov_md": false 01:25:37.414 }, 01:25:37.414 "memory_domains": [ 01:25:37.414 { 01:25:37.414 "dma_device_id": "system", 01:25:37.414 "dma_device_type": 1 01:25:37.414 }, 01:25:37.414 { 01:25:37.414 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:25:37.414 "dma_device_type": 2 01:25:37.414 } 01:25:37.414 ], 01:25:37.414 "driver_specific": {} 01:25:37.414 } 01:25:37.414 ] 01:25:37.414 05:20:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:37.414 05:20:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 01:25:37.414 05:20:28 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 01:25:37.414 05:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:25:37.414 05:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:25:37.414 05:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 01:25:37.414 05:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:25:37.414 05:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 01:25:37.414 05:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:25:37.414 05:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:25:37.414 05:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:25:37.414 05:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 01:25:37.414 05:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:25:37.414 05:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:25:37.414 05:20:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:37.414 05:20:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:25:37.414 05:20:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:37.414 05:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:25:37.414 "name": "Existed_Raid", 01:25:37.414 "uuid": "3109fa97-5b40-4c10-bb1d-a178fcd0cf98", 01:25:37.414 "strip_size_kb": 64, 01:25:37.414 
"state": "online", 01:25:37.414 "raid_level": "concat", 01:25:37.414 "superblock": true, 01:25:37.414 "num_base_bdevs": 4, 01:25:37.414 "num_base_bdevs_discovered": 4, 01:25:37.414 "num_base_bdevs_operational": 4, 01:25:37.414 "base_bdevs_list": [ 01:25:37.414 { 01:25:37.414 "name": "NewBaseBdev", 01:25:37.414 "uuid": "be044b05-2aec-41d8-b03d-827ffe46c117", 01:25:37.414 "is_configured": true, 01:25:37.414 "data_offset": 2048, 01:25:37.414 "data_size": 63488 01:25:37.414 }, 01:25:37.414 { 01:25:37.414 "name": "BaseBdev2", 01:25:37.414 "uuid": "61e8ef32-9dcf-4ca2-8538-4c02965d5018", 01:25:37.414 "is_configured": true, 01:25:37.414 "data_offset": 2048, 01:25:37.414 "data_size": 63488 01:25:37.414 }, 01:25:37.414 { 01:25:37.414 "name": "BaseBdev3", 01:25:37.414 "uuid": "dd2e1f9f-2f6c-4827-ab68-668185768d20", 01:25:37.414 "is_configured": true, 01:25:37.414 "data_offset": 2048, 01:25:37.414 "data_size": 63488 01:25:37.414 }, 01:25:37.414 { 01:25:37.414 "name": "BaseBdev4", 01:25:37.414 "uuid": "47bc76fa-9e06-427b-82cb-91f2de70eaf9", 01:25:37.414 "is_configured": true, 01:25:37.414 "data_offset": 2048, 01:25:37.414 "data_size": 63488 01:25:37.414 } 01:25:37.414 ] 01:25:37.414 }' 01:25:37.414 05:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:25:37.414 05:20:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:25:37.980 05:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 01:25:37.980 05:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 01:25:37.980 05:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 01:25:37.980 05:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 01:25:37.980 05:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 01:25:37.980 
05:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 01:25:37.980 05:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 01:25:37.980 05:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 01:25:37.980 05:20:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:37.980 05:20:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:25:37.980 [2024-12-09 05:20:29.370153] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 01:25:37.980 05:20:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:37.980 05:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 01:25:37.980 "name": "Existed_Raid", 01:25:37.980 "aliases": [ 01:25:37.980 "3109fa97-5b40-4c10-bb1d-a178fcd0cf98" 01:25:37.980 ], 01:25:37.980 "product_name": "Raid Volume", 01:25:37.980 "block_size": 512, 01:25:37.980 "num_blocks": 253952, 01:25:37.980 "uuid": "3109fa97-5b40-4c10-bb1d-a178fcd0cf98", 01:25:37.980 "assigned_rate_limits": { 01:25:37.980 "rw_ios_per_sec": 0, 01:25:37.980 "rw_mbytes_per_sec": 0, 01:25:37.980 "r_mbytes_per_sec": 0, 01:25:37.980 "w_mbytes_per_sec": 0 01:25:37.980 }, 01:25:37.980 "claimed": false, 01:25:37.980 "zoned": false, 01:25:37.980 "supported_io_types": { 01:25:37.980 "read": true, 01:25:37.980 "write": true, 01:25:37.980 "unmap": true, 01:25:37.980 "flush": true, 01:25:37.980 "reset": true, 01:25:37.980 "nvme_admin": false, 01:25:37.980 "nvme_io": false, 01:25:37.980 "nvme_io_md": false, 01:25:37.980 "write_zeroes": true, 01:25:37.980 "zcopy": false, 01:25:37.980 "get_zone_info": false, 01:25:37.980 "zone_management": false, 01:25:37.980 "zone_append": false, 01:25:37.980 "compare": false, 01:25:37.980 "compare_and_write": false, 01:25:37.980 "abort": 
false, 01:25:37.980 "seek_hole": false, 01:25:37.980 "seek_data": false, 01:25:37.980 "copy": false, 01:25:37.980 "nvme_iov_md": false 01:25:37.980 }, 01:25:37.980 "memory_domains": [ 01:25:37.980 { 01:25:37.980 "dma_device_id": "system", 01:25:37.980 "dma_device_type": 1 01:25:37.980 }, 01:25:37.980 { 01:25:37.980 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:25:37.980 "dma_device_type": 2 01:25:37.980 }, 01:25:37.980 { 01:25:37.980 "dma_device_id": "system", 01:25:37.980 "dma_device_type": 1 01:25:37.980 }, 01:25:37.980 { 01:25:37.980 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:25:37.980 "dma_device_type": 2 01:25:37.980 }, 01:25:37.980 { 01:25:37.980 "dma_device_id": "system", 01:25:37.980 "dma_device_type": 1 01:25:37.980 }, 01:25:37.980 { 01:25:37.980 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:25:37.980 "dma_device_type": 2 01:25:37.980 }, 01:25:37.980 { 01:25:37.980 "dma_device_id": "system", 01:25:37.980 "dma_device_type": 1 01:25:37.980 }, 01:25:37.980 { 01:25:37.980 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:25:37.980 "dma_device_type": 2 01:25:37.980 } 01:25:37.980 ], 01:25:37.980 "driver_specific": { 01:25:37.980 "raid": { 01:25:37.980 "uuid": "3109fa97-5b40-4c10-bb1d-a178fcd0cf98", 01:25:37.980 "strip_size_kb": 64, 01:25:37.980 "state": "online", 01:25:37.980 "raid_level": "concat", 01:25:37.980 "superblock": true, 01:25:37.980 "num_base_bdevs": 4, 01:25:37.980 "num_base_bdevs_discovered": 4, 01:25:37.980 "num_base_bdevs_operational": 4, 01:25:37.980 "base_bdevs_list": [ 01:25:37.980 { 01:25:37.980 "name": "NewBaseBdev", 01:25:37.980 "uuid": "be044b05-2aec-41d8-b03d-827ffe46c117", 01:25:37.980 "is_configured": true, 01:25:37.980 "data_offset": 2048, 01:25:37.980 "data_size": 63488 01:25:37.980 }, 01:25:37.980 { 01:25:37.980 "name": "BaseBdev2", 01:25:37.980 "uuid": "61e8ef32-9dcf-4ca2-8538-4c02965d5018", 01:25:37.980 "is_configured": true, 01:25:37.980 "data_offset": 2048, 01:25:37.980 "data_size": 63488 01:25:37.980 }, 01:25:37.980 { 01:25:37.980 
"name": "BaseBdev3", 01:25:37.980 "uuid": "dd2e1f9f-2f6c-4827-ab68-668185768d20", 01:25:37.980 "is_configured": true, 01:25:37.980 "data_offset": 2048, 01:25:37.980 "data_size": 63488 01:25:37.980 }, 01:25:37.980 { 01:25:37.980 "name": "BaseBdev4", 01:25:37.980 "uuid": "47bc76fa-9e06-427b-82cb-91f2de70eaf9", 01:25:37.980 "is_configured": true, 01:25:37.980 "data_offset": 2048, 01:25:37.980 "data_size": 63488 01:25:37.980 } 01:25:37.980 ] 01:25:37.980 } 01:25:37.980 } 01:25:37.980 }' 01:25:37.980 05:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 01:25:37.980 05:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 01:25:37.980 BaseBdev2 01:25:37.980 BaseBdev3 01:25:37.980 BaseBdev4' 01:25:37.980 05:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:25:37.980 05:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 01:25:37.980 05:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:25:37.980 05:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 01:25:37.980 05:20:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:37.980 05:20:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:25:37.980 05:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:25:37.980 05:20:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:37.980 05:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:25:37.980 05:20:29 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:25:37.980 05:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:25:37.980 05:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 01:25:37.980 05:20:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:37.980 05:20:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:25:37.980 05:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:25:37.980 05:20:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:37.980 05:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:25:37.980 05:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:25:37.980 05:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:25:38.239 05:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:25:38.239 05:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 01:25:38.239 05:20:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:38.239 05:20:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:25:38.239 05:20:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:38.239 05:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:25:38.239 05:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 01:25:38.239 05:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:25:38.239 05:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 01:25:38.239 05:20:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:38.239 05:20:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:25:38.239 05:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:25:38.239 05:20:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:38.239 05:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:25:38.239 05:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:25:38.239 05:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 01:25:38.239 05:20:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:38.239 05:20:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:25:38.239 [2024-12-09 05:20:29.697789] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 01:25:38.239 [2024-12-09 05:20:29.697846] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 01:25:38.239 [2024-12-09 05:20:29.697951] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 01:25:38.239 [2024-12-09 05:20:29.698055] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 01:25:38.239 [2024-12-09 05:20:29.698073] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 01:25:38.239 05:20:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:38.239 05:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 72012 01:25:38.239 05:20:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 72012 ']' 01:25:38.239 05:20:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 72012 01:25:38.239 05:20:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 01:25:38.239 05:20:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:25:38.239 05:20:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72012 01:25:38.239 killing process with pid 72012 01:25:38.239 05:20:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:25:38.239 05:20:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:25:38.239 05:20:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72012' 01:25:38.239 05:20:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 72012 01:25:38.239 [2024-12-09 05:20:29.735682] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 01:25:38.239 05:20:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 72012 01:25:38.497 [2024-12-09 05:20:30.097375] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 01:25:39.868 05:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 01:25:39.868 01:25:39.868 real 0m13.012s 01:25:39.868 user 0m21.463s 01:25:39.868 sys 0m1.811s 01:25:39.868 05:20:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 01:25:39.868 
************************************ 01:25:39.868 END TEST raid_state_function_test_sb 01:25:39.868 ************************************ 01:25:39.868 05:20:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:25:39.868 05:20:31 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 4 01:25:39.868 05:20:31 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 01:25:39.868 05:20:31 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 01:25:39.868 05:20:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 01:25:39.868 ************************************ 01:25:39.868 START TEST raid_superblock_test 01:25:39.868 ************************************ 01:25:39.868 05:20:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 4 01:25:39.868 05:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 01:25:39.868 05:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 01:25:39.868 05:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 01:25:39.868 05:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 01:25:39.868 05:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 01:25:39.868 05:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 01:25:39.868 05:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 01:25:39.868 05:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 01:25:39.868 05:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 01:25:39.868 05:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 01:25:39.868 05:20:31 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 01:25:39.868 05:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 01:25:39.868 05:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 01:25:39.868 05:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 01:25:39.868 05:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 01:25:39.868 05:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 01:25:39.868 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:25:39.868 05:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=72689 01:25:39.868 05:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 72689 01:25:39.868 05:20:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 72689 ']' 01:25:39.868 05:20:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:25:39.868 05:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 01:25:39.868 05:20:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 01:25:39.868 05:20:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:25:39.868 05:20:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 01:25:39.868 05:20:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:25:39.868 [2024-12-09 05:20:31.459421] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
01:25:39.868 [2024-12-09 05:20:31.459874] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72689 ] 01:25:40.126 [2024-12-09 05:20:31.646295] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:25:40.383 [2024-12-09 05:20:31.778869] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:25:40.383 [2024-12-09 05:20:31.980471] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 01:25:40.383 [2024-12-09 05:20:31.980545] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 01:25:40.949 05:20:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:25:40.949 05:20:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 01:25:40.949 05:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 01:25:40.949 05:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 01:25:40.949 05:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 01:25:40.949 05:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 01:25:40.949 05:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 01:25:40.949 05:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 01:25:40.949 05:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 01:25:40.949 05:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 01:25:40.949 05:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 01:25:40.949 
05:20:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:40.949 05:20:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:25:40.949 malloc1 01:25:40.949 05:20:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:40.949 05:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 01:25:40.949 05:20:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:40.949 05:20:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:25:40.949 [2024-12-09 05:20:32.462622] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 01:25:40.949 [2024-12-09 05:20:32.462706] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:25:40.950 [2024-12-09 05:20:32.462740] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 01:25:40.950 [2024-12-09 05:20:32.462757] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:25:40.950 [2024-12-09 05:20:32.465670] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:25:40.950 [2024-12-09 05:20:32.465861] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 01:25:40.950 pt1 01:25:40.950 05:20:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:40.950 05:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 01:25:40.950 05:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 01:25:40.950 05:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 01:25:40.950 05:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 01:25:40.950 05:20:32 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 01:25:40.950 05:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 01:25:40.950 05:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 01:25:40.950 05:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 01:25:40.950 05:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 01:25:40.950 05:20:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:40.950 05:20:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:25:40.950 malloc2 01:25:40.950 05:20:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:40.950 05:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 01:25:40.950 05:20:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:40.950 05:20:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:25:40.950 [2024-12-09 05:20:32.518901] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 01:25:40.950 [2024-12-09 05:20:32.519190] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:25:40.950 [2024-12-09 05:20:32.519416] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 01:25:40.950 [2024-12-09 05:20:32.519564] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:25:40.950 [2024-12-09 05:20:32.522536] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:25:40.950 [2024-12-09 05:20:32.522704] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 01:25:40.950 
pt2 01:25:40.950 05:20:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:40.950 05:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 01:25:40.950 05:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 01:25:40.950 05:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 01:25:40.950 05:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 01:25:40.950 05:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 01:25:40.950 05:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 01:25:40.950 05:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 01:25:40.950 05:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 01:25:40.950 05:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 01:25:40.950 05:20:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:40.950 05:20:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:25:41.208 malloc3 01:25:41.208 05:20:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:41.208 05:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 01:25:41.208 05:20:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:41.208 05:20:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:25:41.208 [2024-12-09 05:20:32.591721] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 01:25:41.208 [2024-12-09 05:20:32.592112] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:25:41.208 [2024-12-09 05:20:32.592161] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 01:25:41.208 [2024-12-09 05:20:32.592180] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:25:41.208 [2024-12-09 05:20:32.595125] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:25:41.208 [2024-12-09 05:20:32.595304] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 01:25:41.208 pt3 01:25:41.208 05:20:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:41.208 05:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 01:25:41.208 05:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 01:25:41.208 05:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 01:25:41.208 05:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 01:25:41.208 05:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 01:25:41.208 05:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 01:25:41.208 05:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 01:25:41.208 05:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 01:25:41.208 05:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 01:25:41.208 05:20:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:41.208 05:20:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:25:41.208 malloc4 01:25:41.208 05:20:32 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:41.208 05:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 01:25:41.208 05:20:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:41.208 05:20:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:25:41.208 [2024-12-09 05:20:32.647776] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 01:25:41.208 [2024-12-09 05:20:32.647890] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:25:41.208 [2024-12-09 05:20:32.647926] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 01:25:41.208 [2024-12-09 05:20:32.647941] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:25:41.208 [2024-12-09 05:20:32.650834] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:25:41.208 [2024-12-09 05:20:32.650879] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 01:25:41.208 pt4 01:25:41.208 05:20:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:41.208 05:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 01:25:41.209 05:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 01:25:41.209 05:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 01:25:41.209 05:20:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:41.209 05:20:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:25:41.209 [2024-12-09 05:20:32.659835] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 01:25:41.209 [2024-12-09 
05:20:32.662301] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 01:25:41.209 [2024-12-09 05:20:32.662461] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 01:25:41.209 [2024-12-09 05:20:32.662539] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 01:25:41.209 [2024-12-09 05:20:32.662802] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 01:25:41.209 [2024-12-09 05:20:32.662821] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 01:25:41.209 [2024-12-09 05:20:32.663144] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 01:25:41.209 [2024-12-09 05:20:32.663362] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 01:25:41.209 [2024-12-09 05:20:32.663435] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 01:25:41.209 [2024-12-09 05:20:32.663624] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:25:41.209 05:20:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:41.209 05:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 01:25:41.209 05:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:25:41.209 05:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:25:41.209 05:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 01:25:41.209 05:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:25:41.209 05:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 01:25:41.209 05:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 01:25:41.209 05:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:25:41.209 05:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:25:41.209 05:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:25:41.209 05:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:25:41.209 05:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:25:41.209 05:20:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:41.209 05:20:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:25:41.209 05:20:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:41.209 05:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:25:41.209 "name": "raid_bdev1", 01:25:41.209 "uuid": "462b9955-9353-408c-8cc5-c7246328dc32", 01:25:41.209 "strip_size_kb": 64, 01:25:41.209 "state": "online", 01:25:41.209 "raid_level": "concat", 01:25:41.209 "superblock": true, 01:25:41.209 "num_base_bdevs": 4, 01:25:41.209 "num_base_bdevs_discovered": 4, 01:25:41.209 "num_base_bdevs_operational": 4, 01:25:41.209 "base_bdevs_list": [ 01:25:41.209 { 01:25:41.209 "name": "pt1", 01:25:41.209 "uuid": "00000000-0000-0000-0000-000000000001", 01:25:41.209 "is_configured": true, 01:25:41.209 "data_offset": 2048, 01:25:41.209 "data_size": 63488 01:25:41.209 }, 01:25:41.209 { 01:25:41.209 "name": "pt2", 01:25:41.209 "uuid": "00000000-0000-0000-0000-000000000002", 01:25:41.209 "is_configured": true, 01:25:41.209 "data_offset": 2048, 01:25:41.209 "data_size": 63488 01:25:41.209 }, 01:25:41.209 { 01:25:41.209 "name": "pt3", 01:25:41.209 "uuid": "00000000-0000-0000-0000-000000000003", 01:25:41.209 "is_configured": true, 01:25:41.209 "data_offset": 2048, 01:25:41.209 
"data_size": 63488 01:25:41.209 }, 01:25:41.209 { 01:25:41.209 "name": "pt4", 01:25:41.209 "uuid": "00000000-0000-0000-0000-000000000004", 01:25:41.209 "is_configured": true, 01:25:41.209 "data_offset": 2048, 01:25:41.209 "data_size": 63488 01:25:41.209 } 01:25:41.209 ] 01:25:41.209 }' 01:25:41.209 05:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:25:41.209 05:20:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:25:41.775 05:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 01:25:41.775 05:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 01:25:41.775 05:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 01:25:41.775 05:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 01:25:41.775 05:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 01:25:41.775 05:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 01:25:41.775 05:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 01:25:41.775 05:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 01:25:41.775 05:20:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:41.775 05:20:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:25:41.775 [2024-12-09 05:20:33.172510] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 01:25:41.775 05:20:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:41.775 05:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 01:25:41.775 "name": "raid_bdev1", 01:25:41.775 "aliases": [ 01:25:41.775 "462b9955-9353-408c-8cc5-c7246328dc32" 
01:25:41.775 ], 01:25:41.775 "product_name": "Raid Volume", 01:25:41.775 "block_size": 512, 01:25:41.775 "num_blocks": 253952, 01:25:41.775 "uuid": "462b9955-9353-408c-8cc5-c7246328dc32", 01:25:41.775 "assigned_rate_limits": { 01:25:41.775 "rw_ios_per_sec": 0, 01:25:41.775 "rw_mbytes_per_sec": 0, 01:25:41.775 "r_mbytes_per_sec": 0, 01:25:41.775 "w_mbytes_per_sec": 0 01:25:41.775 }, 01:25:41.775 "claimed": false, 01:25:41.775 "zoned": false, 01:25:41.775 "supported_io_types": { 01:25:41.775 "read": true, 01:25:41.775 "write": true, 01:25:41.775 "unmap": true, 01:25:41.775 "flush": true, 01:25:41.775 "reset": true, 01:25:41.775 "nvme_admin": false, 01:25:41.775 "nvme_io": false, 01:25:41.775 "nvme_io_md": false, 01:25:41.775 "write_zeroes": true, 01:25:41.775 "zcopy": false, 01:25:41.775 "get_zone_info": false, 01:25:41.775 "zone_management": false, 01:25:41.775 "zone_append": false, 01:25:41.775 "compare": false, 01:25:41.775 "compare_and_write": false, 01:25:41.775 "abort": false, 01:25:41.775 "seek_hole": false, 01:25:41.775 "seek_data": false, 01:25:41.775 "copy": false, 01:25:41.775 "nvme_iov_md": false 01:25:41.775 }, 01:25:41.775 "memory_domains": [ 01:25:41.775 { 01:25:41.775 "dma_device_id": "system", 01:25:41.775 "dma_device_type": 1 01:25:41.775 }, 01:25:41.775 { 01:25:41.775 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:25:41.775 "dma_device_type": 2 01:25:41.775 }, 01:25:41.775 { 01:25:41.775 "dma_device_id": "system", 01:25:41.775 "dma_device_type": 1 01:25:41.775 }, 01:25:41.775 { 01:25:41.775 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:25:41.775 "dma_device_type": 2 01:25:41.775 }, 01:25:41.775 { 01:25:41.775 "dma_device_id": "system", 01:25:41.775 "dma_device_type": 1 01:25:41.775 }, 01:25:41.775 { 01:25:41.775 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:25:41.775 "dma_device_type": 2 01:25:41.775 }, 01:25:41.775 { 01:25:41.775 "dma_device_id": "system", 01:25:41.775 "dma_device_type": 1 01:25:41.775 }, 01:25:41.775 { 01:25:41.775 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 01:25:41.775 "dma_device_type": 2 01:25:41.775 } 01:25:41.775 ], 01:25:41.775 "driver_specific": { 01:25:41.775 "raid": { 01:25:41.775 "uuid": "462b9955-9353-408c-8cc5-c7246328dc32", 01:25:41.775 "strip_size_kb": 64, 01:25:41.775 "state": "online", 01:25:41.775 "raid_level": "concat", 01:25:41.775 "superblock": true, 01:25:41.776 "num_base_bdevs": 4, 01:25:41.776 "num_base_bdevs_discovered": 4, 01:25:41.776 "num_base_bdevs_operational": 4, 01:25:41.776 "base_bdevs_list": [ 01:25:41.776 { 01:25:41.776 "name": "pt1", 01:25:41.776 "uuid": "00000000-0000-0000-0000-000000000001", 01:25:41.776 "is_configured": true, 01:25:41.776 "data_offset": 2048, 01:25:41.776 "data_size": 63488 01:25:41.776 }, 01:25:41.776 { 01:25:41.776 "name": "pt2", 01:25:41.776 "uuid": "00000000-0000-0000-0000-000000000002", 01:25:41.776 "is_configured": true, 01:25:41.776 "data_offset": 2048, 01:25:41.776 "data_size": 63488 01:25:41.776 }, 01:25:41.776 { 01:25:41.776 "name": "pt3", 01:25:41.776 "uuid": "00000000-0000-0000-0000-000000000003", 01:25:41.776 "is_configured": true, 01:25:41.776 "data_offset": 2048, 01:25:41.776 "data_size": 63488 01:25:41.776 }, 01:25:41.776 { 01:25:41.776 "name": "pt4", 01:25:41.776 "uuid": "00000000-0000-0000-0000-000000000004", 01:25:41.776 "is_configured": true, 01:25:41.776 "data_offset": 2048, 01:25:41.776 "data_size": 63488 01:25:41.776 } 01:25:41.776 ] 01:25:41.776 } 01:25:41.776 } 01:25:41.776 }' 01:25:41.776 05:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 01:25:41.776 05:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 01:25:41.776 pt2 01:25:41.776 pt3 01:25:41.776 pt4' 01:25:41.776 05:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:25:41.776 05:20:33 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 01:25:41.776 05:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:25:41.776 05:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 01:25:41.776 05:20:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:41.776 05:20:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:25:41.776 05:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:25:41.776 05:20:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:41.776 05:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:25:41.776 05:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:25:41.776 05:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:25:41.776 05:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 01:25:41.776 05:20:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:41.776 05:20:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:25:41.776 05:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:25:41.776 05:20:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:42.034 05:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:25:42.034 05:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:25:42.034 05:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:25:42.034 05:20:33 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 01:25:42.034 05:20:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:42.034 05:20:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:25:42.034 05:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:25:42.034 05:20:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:42.034 05:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:25:42.034 05:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:25:42.034 05:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:25:42.034 05:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 01:25:42.034 05:20:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:42.034 05:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:25:42.034 05:20:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:25:42.034 05:20:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:42.034 05:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:25:42.034 05:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:25:42.034 05:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 01:25:42.034 05:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 01:25:42.034 05:20:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 01:25:42.034 05:20:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:25:42.034 [2024-12-09 05:20:33.528578] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 01:25:42.034 05:20:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:42.034 05:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=462b9955-9353-408c-8cc5-c7246328dc32 01:25:42.034 05:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 462b9955-9353-408c-8cc5-c7246328dc32 ']' 01:25:42.034 05:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 01:25:42.034 05:20:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:42.034 05:20:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:25:42.034 [2024-12-09 05:20:33.576164] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 01:25:42.034 [2024-12-09 05:20:33.576418] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 01:25:42.034 [2024-12-09 05:20:33.576563] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 01:25:42.034 [2024-12-09 05:20:33.576662] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 01:25:42.034 [2024-12-09 05:20:33.576686] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 01:25:42.034 05:20:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:42.034 05:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 01:25:42.034 05:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 01:25:42.034 05:20:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 01:25:42.034 05:20:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:25:42.034 05:20:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:42.034 05:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 01:25:42.034 05:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 01:25:42.034 05:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 01:25:42.034 05:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 01:25:42.034 05:20:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:42.034 05:20:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:25:42.034 05:20:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:42.034 05:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 01:25:42.034 05:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 01:25:42.034 05:20:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:42.034 05:20:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:25:42.293 05:20:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:42.293 05:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 01:25:42.293 05:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 01:25:42.293 05:20:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:42.293 05:20:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:25:42.293 05:20:33 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:42.293 05:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 01:25:42.293 05:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 01:25:42.293 05:20:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:42.293 05:20:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:25:42.293 05:20:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:42.293 05:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 01:25:42.293 05:20:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:42.293 05:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 01:25:42.293 05:20:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:25:42.293 05:20:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:42.293 05:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 01:25:42.293 05:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 01:25:42.293 05:20:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 01:25:42.293 05:20:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 01:25:42.293 05:20:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 01:25:42.293 05:20:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:25:42.293 05:20:33 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 01:25:42.293 05:20:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:25:42.293 05:20:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 01:25:42.293 05:20:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:42.293 05:20:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:25:42.293 [2024-12-09 05:20:33.732256] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 01:25:42.293 [2024-12-09 05:20:33.735120] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 01:25:42.293 [2024-12-09 05:20:33.735193] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 01:25:42.293 [2024-12-09 05:20:33.735251] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 01:25:42.293 [2024-12-09 05:20:33.735331] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 01:25:42.293 [2024-12-09 05:20:33.735438] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 01:25:42.293 [2024-12-09 05:20:33.735482] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 01:25:42.293 [2024-12-09 05:20:33.735515] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 01:25:42.293 [2024-12-09 05:20:33.735538] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 01:25:42.293 [2024-12-09 05:20:33.735555] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007b00 name raid_bdev1, state configuring 01:25:42.293 request: 01:25:42.293 { 01:25:42.293 "name": "raid_bdev1", 01:25:42.293 "raid_level": "concat", 01:25:42.293 "base_bdevs": [ 01:25:42.293 "malloc1", 01:25:42.293 "malloc2", 01:25:42.293 "malloc3", 01:25:42.293 "malloc4" 01:25:42.293 ], 01:25:42.293 "strip_size_kb": 64, 01:25:42.293 "superblock": false, 01:25:42.293 "method": "bdev_raid_create", 01:25:42.293 "req_id": 1 01:25:42.293 } 01:25:42.293 Got JSON-RPC error response 01:25:42.293 response: 01:25:42.293 { 01:25:42.293 "code": -17, 01:25:42.293 "message": "Failed to create RAID bdev raid_bdev1: File exists" 01:25:42.293 } 01:25:42.293 05:20:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 01:25:42.293 05:20:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 01:25:42.293 05:20:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:25:42.293 05:20:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:25:42.293 05:20:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:25:42.293 05:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 01:25:42.293 05:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 01:25:42.293 05:20:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:42.293 05:20:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:25:42.293 05:20:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:42.293 05:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 01:25:42.293 05:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 01:25:42.293 05:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 
-u 00000000-0000-0000-0000-000000000001 01:25:42.293 05:20:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:42.293 05:20:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:25:42.293 [2024-12-09 05:20:33.800308] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 01:25:42.293 [2024-12-09 05:20:33.800644] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:25:42.293 [2024-12-09 05:20:33.800688] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 01:25:42.293 [2024-12-09 05:20:33.800716] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:25:42.293 [2024-12-09 05:20:33.803723] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:25:42.293 [2024-12-09 05:20:33.803776] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 01:25:42.293 [2024-12-09 05:20:33.803888] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 01:25:42.294 [2024-12-09 05:20:33.803968] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 01:25:42.294 pt1 01:25:42.294 05:20:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:42.294 05:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 01:25:42.294 05:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:25:42.294 05:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:25:42.294 05:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 01:25:42.294 05:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:25:42.294 05:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 01:25:42.294 05:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:25:42.294 05:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:25:42.294 05:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:25:42.294 05:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:25:42.294 05:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:25:42.294 05:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:25:42.294 05:20:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:42.294 05:20:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:25:42.294 05:20:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:42.294 05:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:25:42.294 "name": "raid_bdev1", 01:25:42.294 "uuid": "462b9955-9353-408c-8cc5-c7246328dc32", 01:25:42.294 "strip_size_kb": 64, 01:25:42.294 "state": "configuring", 01:25:42.294 "raid_level": "concat", 01:25:42.294 "superblock": true, 01:25:42.294 "num_base_bdevs": 4, 01:25:42.294 "num_base_bdevs_discovered": 1, 01:25:42.294 "num_base_bdevs_operational": 4, 01:25:42.294 "base_bdevs_list": [ 01:25:42.294 { 01:25:42.294 "name": "pt1", 01:25:42.294 "uuid": "00000000-0000-0000-0000-000000000001", 01:25:42.294 "is_configured": true, 01:25:42.294 "data_offset": 2048, 01:25:42.294 "data_size": 63488 01:25:42.294 }, 01:25:42.294 { 01:25:42.294 "name": null, 01:25:42.294 "uuid": "00000000-0000-0000-0000-000000000002", 01:25:42.294 "is_configured": false, 01:25:42.294 "data_offset": 2048, 01:25:42.294 "data_size": 63488 01:25:42.294 }, 01:25:42.294 { 01:25:42.294 "name": null, 01:25:42.294 
"uuid": "00000000-0000-0000-0000-000000000003", 01:25:42.294 "is_configured": false, 01:25:42.294 "data_offset": 2048, 01:25:42.294 "data_size": 63488 01:25:42.294 }, 01:25:42.294 { 01:25:42.294 "name": null, 01:25:42.294 "uuid": "00000000-0000-0000-0000-000000000004", 01:25:42.294 "is_configured": false, 01:25:42.294 "data_offset": 2048, 01:25:42.294 "data_size": 63488 01:25:42.294 } 01:25:42.294 ] 01:25:42.294 }' 01:25:42.294 05:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:25:42.294 05:20:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:25:42.882 05:20:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 01:25:42.882 05:20:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 01:25:42.882 05:20:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:42.882 05:20:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:25:42.882 [2024-12-09 05:20:34.340564] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 01:25:42.882 [2024-12-09 05:20:34.340683] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:25:42.882 [2024-12-09 05:20:34.340721] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 01:25:42.882 [2024-12-09 05:20:34.340757] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:25:42.882 [2024-12-09 05:20:34.341468] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:25:42.882 [2024-12-09 05:20:34.341530] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 01:25:42.882 [2024-12-09 05:20:34.341659] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 01:25:42.882 [2024-12-09 05:20:34.341705] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 01:25:42.882 pt2 01:25:42.882 05:20:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:42.882 05:20:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 01:25:42.882 05:20:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:42.882 05:20:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:25:42.882 [2024-12-09 05:20:34.348488] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 01:25:42.882 05:20:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:42.882 05:20:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 01:25:42.882 05:20:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:25:42.882 05:20:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:25:42.882 05:20:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 01:25:42.882 05:20:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:25:42.882 05:20:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 01:25:42.882 05:20:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:25:42.882 05:20:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:25:42.882 05:20:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:25:42.882 05:20:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:25:42.882 05:20:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:25:42.882 05:20:34 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:25:42.882 05:20:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:42.882 05:20:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:25:42.882 05:20:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:42.882 05:20:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:25:42.882 "name": "raid_bdev1", 01:25:42.882 "uuid": "462b9955-9353-408c-8cc5-c7246328dc32", 01:25:42.882 "strip_size_kb": 64, 01:25:42.882 "state": "configuring", 01:25:42.882 "raid_level": "concat", 01:25:42.882 "superblock": true, 01:25:42.882 "num_base_bdevs": 4, 01:25:42.882 "num_base_bdevs_discovered": 1, 01:25:42.882 "num_base_bdevs_operational": 4, 01:25:42.882 "base_bdevs_list": [ 01:25:42.882 { 01:25:42.882 "name": "pt1", 01:25:42.882 "uuid": "00000000-0000-0000-0000-000000000001", 01:25:42.882 "is_configured": true, 01:25:42.882 "data_offset": 2048, 01:25:42.882 "data_size": 63488 01:25:42.882 }, 01:25:42.882 { 01:25:42.882 "name": null, 01:25:42.882 "uuid": "00000000-0000-0000-0000-000000000002", 01:25:42.882 "is_configured": false, 01:25:42.882 "data_offset": 0, 01:25:42.882 "data_size": 63488 01:25:42.882 }, 01:25:42.882 { 01:25:42.882 "name": null, 01:25:42.882 "uuid": "00000000-0000-0000-0000-000000000003", 01:25:42.882 "is_configured": false, 01:25:42.882 "data_offset": 2048, 01:25:42.882 "data_size": 63488 01:25:42.882 }, 01:25:42.883 { 01:25:42.883 "name": null, 01:25:42.883 "uuid": "00000000-0000-0000-0000-000000000004", 01:25:42.883 "is_configured": false, 01:25:42.883 "data_offset": 2048, 01:25:42.883 "data_size": 63488 01:25:42.883 } 01:25:42.883 ] 01:25:42.883 }' 01:25:42.883 05:20:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:25:42.883 05:20:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # 
set +x 01:25:43.475 05:20:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 01:25:43.475 05:20:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 01:25:43.475 05:20:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 01:25:43.475 05:20:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:43.475 05:20:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:25:43.475 [2024-12-09 05:20:34.880751] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 01:25:43.475 [2024-12-09 05:20:34.880868] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:25:43.475 [2024-12-09 05:20:34.880904] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 01:25:43.475 [2024-12-09 05:20:34.880920] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:25:43.475 [2024-12-09 05:20:34.881654] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:25:43.475 [2024-12-09 05:20:34.881681] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 01:25:43.475 [2024-12-09 05:20:34.881804] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 01:25:43.475 [2024-12-09 05:20:34.881849] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 01:25:43.475 pt2 01:25:43.475 05:20:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:43.475 05:20:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 01:25:43.475 05:20:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 01:25:43.475 05:20:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p 
pt3 -u 00000000-0000-0000-0000-000000000003 01:25:43.475 05:20:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:43.475 05:20:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:25:43.475 [2024-12-09 05:20:34.892660] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 01:25:43.475 [2024-12-09 05:20:34.892918] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:25:43.475 [2024-12-09 05:20:34.892995] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 01:25:43.475 [2024-12-09 05:20:34.893205] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:25:43.475 [2024-12-09 05:20:34.893914] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:25:43.475 [2024-12-09 05:20:34.894103] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 01:25:43.475 [2024-12-09 05:20:34.894324] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 01:25:43.475 [2024-12-09 05:20:34.894510] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 01:25:43.475 pt3 01:25:43.475 05:20:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:43.475 05:20:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 01:25:43.475 05:20:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 01:25:43.475 05:20:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 01:25:43.475 05:20:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:43.475 05:20:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:25:43.475 [2024-12-09 05:20:34.904641] vbdev_passthru.c: 608:vbdev_passthru_register: 
*NOTICE*: Match on malloc4 01:25:43.475 [2024-12-09 05:20:34.904721] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:25:43.475 [2024-12-09 05:20:34.904771] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 01:25:43.475 [2024-12-09 05:20:34.904787] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:25:43.475 [2024-12-09 05:20:34.905507] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:25:43.475 [2024-12-09 05:20:34.905549] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 01:25:43.475 [2024-12-09 05:20:34.905662] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 01:25:43.475 [2024-12-09 05:20:34.905705] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 01:25:43.475 [2024-12-09 05:20:34.905907] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 01:25:43.475 [2024-12-09 05:20:34.905924] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 01:25:43.475 [2024-12-09 05:20:34.906249] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 01:25:43.475 [2024-12-09 05:20:34.906487] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 01:25:43.475 [2024-12-09 05:20:34.906513] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 01:25:43.475 [2024-12-09 05:20:34.906685] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:25:43.475 pt4 01:25:43.475 05:20:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:43.475 05:20:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 01:25:43.475 05:20:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 01:25:43.475 
05:20:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 01:25:43.475 05:20:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:25:43.475 05:20:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:25:43.475 05:20:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 01:25:43.475 05:20:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:25:43.475 05:20:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 01:25:43.475 05:20:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:25:43.475 05:20:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:25:43.475 05:20:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:25:43.475 05:20:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:25:43.475 05:20:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:25:43.475 05:20:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:25:43.475 05:20:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:43.475 05:20:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:25:43.475 05:20:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:43.475 05:20:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:25:43.475 "name": "raid_bdev1", 01:25:43.476 "uuid": "462b9955-9353-408c-8cc5-c7246328dc32", 01:25:43.476 "strip_size_kb": 64, 01:25:43.476 "state": "online", 01:25:43.476 "raid_level": "concat", 01:25:43.476 "superblock": true, 01:25:43.476 
"num_base_bdevs": 4, 01:25:43.476 "num_base_bdevs_discovered": 4, 01:25:43.476 "num_base_bdevs_operational": 4, 01:25:43.476 "base_bdevs_list": [ 01:25:43.476 { 01:25:43.476 "name": "pt1", 01:25:43.476 "uuid": "00000000-0000-0000-0000-000000000001", 01:25:43.476 "is_configured": true, 01:25:43.476 "data_offset": 2048, 01:25:43.476 "data_size": 63488 01:25:43.476 }, 01:25:43.476 { 01:25:43.476 "name": "pt2", 01:25:43.476 "uuid": "00000000-0000-0000-0000-000000000002", 01:25:43.476 "is_configured": true, 01:25:43.476 "data_offset": 2048, 01:25:43.476 "data_size": 63488 01:25:43.476 }, 01:25:43.476 { 01:25:43.476 "name": "pt3", 01:25:43.476 "uuid": "00000000-0000-0000-0000-000000000003", 01:25:43.476 "is_configured": true, 01:25:43.476 "data_offset": 2048, 01:25:43.476 "data_size": 63488 01:25:43.476 }, 01:25:43.476 { 01:25:43.476 "name": "pt4", 01:25:43.476 "uuid": "00000000-0000-0000-0000-000000000004", 01:25:43.476 "is_configured": true, 01:25:43.476 "data_offset": 2048, 01:25:43.476 "data_size": 63488 01:25:43.476 } 01:25:43.476 ] 01:25:43.476 }' 01:25:43.476 05:20:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:25:43.476 05:20:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:25:44.040 05:20:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 01:25:44.040 05:20:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 01:25:44.040 05:20:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 01:25:44.040 05:20:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 01:25:44.040 05:20:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 01:25:44.040 05:20:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 01:25:44.040 05:20:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # 
rpc_cmd bdev_get_bdevs -b raid_bdev1 01:25:44.040 05:20:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:44.040 05:20:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 01:25:44.040 05:20:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:25:44.040 [2024-12-09 05:20:35.461350] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 01:25:44.040 05:20:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:44.040 05:20:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 01:25:44.040 "name": "raid_bdev1", 01:25:44.040 "aliases": [ 01:25:44.040 "462b9955-9353-408c-8cc5-c7246328dc32" 01:25:44.040 ], 01:25:44.040 "product_name": "Raid Volume", 01:25:44.040 "block_size": 512, 01:25:44.040 "num_blocks": 253952, 01:25:44.040 "uuid": "462b9955-9353-408c-8cc5-c7246328dc32", 01:25:44.040 "assigned_rate_limits": { 01:25:44.040 "rw_ios_per_sec": 0, 01:25:44.040 "rw_mbytes_per_sec": 0, 01:25:44.040 "r_mbytes_per_sec": 0, 01:25:44.040 "w_mbytes_per_sec": 0 01:25:44.040 }, 01:25:44.040 "claimed": false, 01:25:44.040 "zoned": false, 01:25:44.040 "supported_io_types": { 01:25:44.040 "read": true, 01:25:44.040 "write": true, 01:25:44.040 "unmap": true, 01:25:44.040 "flush": true, 01:25:44.041 "reset": true, 01:25:44.041 "nvme_admin": false, 01:25:44.041 "nvme_io": false, 01:25:44.041 "nvme_io_md": false, 01:25:44.041 "write_zeroes": true, 01:25:44.041 "zcopy": false, 01:25:44.041 "get_zone_info": false, 01:25:44.041 "zone_management": false, 01:25:44.041 "zone_append": false, 01:25:44.041 "compare": false, 01:25:44.041 "compare_and_write": false, 01:25:44.041 "abort": false, 01:25:44.041 "seek_hole": false, 01:25:44.041 "seek_data": false, 01:25:44.041 "copy": false, 01:25:44.041 "nvme_iov_md": false 01:25:44.041 }, 01:25:44.041 "memory_domains": [ 01:25:44.041 { 01:25:44.041 "dma_device_id": "system", 
01:25:44.041 "dma_device_type": 1 01:25:44.041 }, 01:25:44.041 { 01:25:44.041 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:25:44.041 "dma_device_type": 2 01:25:44.041 }, 01:25:44.041 { 01:25:44.041 "dma_device_id": "system", 01:25:44.041 "dma_device_type": 1 01:25:44.041 }, 01:25:44.041 { 01:25:44.041 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:25:44.041 "dma_device_type": 2 01:25:44.041 }, 01:25:44.041 { 01:25:44.041 "dma_device_id": "system", 01:25:44.041 "dma_device_type": 1 01:25:44.041 }, 01:25:44.041 { 01:25:44.041 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:25:44.041 "dma_device_type": 2 01:25:44.041 }, 01:25:44.041 { 01:25:44.041 "dma_device_id": "system", 01:25:44.041 "dma_device_type": 1 01:25:44.041 }, 01:25:44.041 { 01:25:44.041 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:25:44.041 "dma_device_type": 2 01:25:44.041 } 01:25:44.041 ], 01:25:44.041 "driver_specific": { 01:25:44.041 "raid": { 01:25:44.041 "uuid": "462b9955-9353-408c-8cc5-c7246328dc32", 01:25:44.041 "strip_size_kb": 64, 01:25:44.041 "state": "online", 01:25:44.041 "raid_level": "concat", 01:25:44.041 "superblock": true, 01:25:44.041 "num_base_bdevs": 4, 01:25:44.041 "num_base_bdevs_discovered": 4, 01:25:44.041 "num_base_bdevs_operational": 4, 01:25:44.041 "base_bdevs_list": [ 01:25:44.041 { 01:25:44.041 "name": "pt1", 01:25:44.041 "uuid": "00000000-0000-0000-0000-000000000001", 01:25:44.041 "is_configured": true, 01:25:44.041 "data_offset": 2048, 01:25:44.041 "data_size": 63488 01:25:44.041 }, 01:25:44.041 { 01:25:44.041 "name": "pt2", 01:25:44.041 "uuid": "00000000-0000-0000-0000-000000000002", 01:25:44.041 "is_configured": true, 01:25:44.041 "data_offset": 2048, 01:25:44.041 "data_size": 63488 01:25:44.041 }, 01:25:44.041 { 01:25:44.041 "name": "pt3", 01:25:44.041 "uuid": "00000000-0000-0000-0000-000000000003", 01:25:44.041 "is_configured": true, 01:25:44.041 "data_offset": 2048, 01:25:44.041 "data_size": 63488 01:25:44.041 }, 01:25:44.041 { 01:25:44.041 "name": "pt4", 01:25:44.041 
"uuid": "00000000-0000-0000-0000-000000000004", 01:25:44.041 "is_configured": true, 01:25:44.041 "data_offset": 2048, 01:25:44.041 "data_size": 63488 01:25:44.041 } 01:25:44.041 ] 01:25:44.041 } 01:25:44.041 } 01:25:44.041 }' 01:25:44.041 05:20:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 01:25:44.041 05:20:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 01:25:44.041 pt2 01:25:44.041 pt3 01:25:44.041 pt4' 01:25:44.041 05:20:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:25:44.041 05:20:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 01:25:44.041 05:20:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:25:44.041 05:20:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 01:25:44.041 05:20:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:25:44.041 05:20:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:44.041 05:20:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:25:44.041 05:20:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:44.299 05:20:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:25:44.299 05:20:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:25:44.299 05:20:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:25:44.299 05:20:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 01:25:44.299 05:20:35 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 01:25:44.299 05:20:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:25:44.299 05:20:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:25:44.299 05:20:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:44.299 05:20:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:25:44.299 05:20:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:25:44.299 05:20:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:25:44.299 05:20:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 01:25:44.299 05:20:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:44.299 05:20:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:25:44.299 05:20:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:25:44.299 05:20:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:44.299 05:20:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:25:44.299 05:20:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:25:44.299 05:20:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:25:44.299 05:20:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 01:25:44.299 05:20:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:44.299 05:20:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:25:44.299 05:20:35 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:25:44.299 05:20:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:44.299 05:20:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:25:44.299 05:20:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:25:44.299 05:20:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 01:25:44.299 05:20:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 01:25:44.299 05:20:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:44.299 05:20:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:25:44.299 [2024-12-09 05:20:35.845431] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 01:25:44.299 05:20:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:44.299 05:20:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 462b9955-9353-408c-8cc5-c7246328dc32 '!=' 462b9955-9353-408c-8cc5-c7246328dc32 ']' 01:25:44.299 05:20:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 01:25:44.299 05:20:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 01:25:44.299 05:20:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 01:25:44.299 05:20:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 72689 01:25:44.299 05:20:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 72689 ']' 01:25:44.299 05:20:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 72689 01:25:44.299 05:20:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 01:25:44.299 05:20:35 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:25:44.299 05:20:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72689 01:25:44.557 05:20:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:25:44.557 05:20:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:25:44.557 05:20:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72689' 01:25:44.557 killing process with pid 72689 01:25:44.557 05:20:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 72689 01:25:44.557 [2024-12-09 05:20:35.931812] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 01:25:44.557 05:20:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 72689 01:25:44.557 [2024-12-09 05:20:35.932104] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 01:25:44.557 [2024-12-09 05:20:35.932326] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 01:25:44.557 [2024-12-09 05:20:35.932487] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 01:25:44.815 [2024-12-09 05:20:36.330157] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 01:25:46.191 ************************************ 01:25:46.191 END TEST raid_superblock_test 01:25:46.191 ************************************ 01:25:46.191 05:20:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 01:25:46.191 01:25:46.191 real 0m6.257s 01:25:46.191 user 0m9.246s 01:25:46.191 sys 0m0.897s 01:25:46.191 05:20:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 01:25:46.191 05:20:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:25:46.191 
05:20:37 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 01:25:46.191 05:20:37 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 01:25:46.191 05:20:37 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 01:25:46.191 05:20:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 01:25:46.191 ************************************ 01:25:46.191 START TEST raid_read_error_test 01:25:46.191 ************************************ 01:25:46.191 05:20:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 read 01:25:46.191 05:20:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 01:25:46.191 05:20:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 01:25:46.191 05:20:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 01:25:46.191 05:20:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 01:25:46.191 05:20:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 01:25:46.191 05:20:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 01:25:46.191 05:20:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 01:25:46.191 05:20:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 01:25:46.191 05:20:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 01:25:46.191 05:20:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 01:25:46.191 05:20:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 01:25:46.191 05:20:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 01:25:46.191 05:20:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 01:25:46.191 05:20:37 bdev_raid.raid_read_error_test 
-- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 01:25:46.191 05:20:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 01:25:46.191 05:20:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 01:25:46.191 05:20:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 01:25:46.191 05:20:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 01:25:46.191 05:20:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 01:25:46.191 05:20:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 01:25:46.191 05:20:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 01:25:46.191 05:20:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 01:25:46.191 05:20:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 01:25:46.191 05:20:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 01:25:46.191 05:20:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 01:25:46.191 05:20:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 01:25:46.191 05:20:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 01:25:46.191 05:20:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 01:25:46.191 05:20:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.8KoTT01x6W 01:25:46.191 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
01:25:46.191 05:20:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=72965 01:25:46.191 05:20:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 72965 01:25:46.191 05:20:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 01:25:46.191 05:20:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 72965 ']' 01:25:46.191 05:20:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:25:46.191 05:20:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 01:25:46.191 05:20:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:25:46.191 05:20:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 01:25:46.191 05:20:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 01:25:46.191 [2024-12-09 05:20:37.793155] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
01:25:46.191 [2024-12-09 05:20:37.793755] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72965 ] 01:25:46.450 [2024-12-09 05:20:37.987682] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:25:46.708 [2024-12-09 05:20:38.140830] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:25:46.966 [2024-12-09 05:20:38.347311] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 01:25:46.966 [2024-12-09 05:20:38.347580] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 01:25:47.224 05:20:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:25:47.224 05:20:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 01:25:47.224 05:20:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 01:25:47.224 05:20:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 01:25:47.224 05:20:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:47.224 05:20:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 01:25:47.224 BaseBdev1_malloc 01:25:47.224 05:20:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:47.224 05:20:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 01:25:47.224 05:20:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:47.224 05:20:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 01:25:47.224 true 01:25:47.224 05:20:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
01:25:47.224 05:20:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 01:25:47.224 05:20:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:47.224 05:20:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 01:25:47.224 [2024-12-09 05:20:38.800034] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 01:25:47.224 [2024-12-09 05:20:38.800265] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:25:47.224 [2024-12-09 05:20:38.800306] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 01:25:47.224 [2024-12-09 05:20:38.800327] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:25:47.224 [2024-12-09 05:20:38.803098] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:25:47.224 [2024-12-09 05:20:38.803162] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 01:25:47.224 BaseBdev1 01:25:47.224 05:20:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:47.224 05:20:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 01:25:47.224 05:20:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 01:25:47.224 05:20:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:47.224 05:20:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 01:25:47.482 BaseBdev2_malloc 01:25:47.482 05:20:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:47.482 05:20:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 01:25:47.482 05:20:38 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 01:25:47.482 05:20:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 01:25:47.482 true 01:25:47.482 05:20:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:47.482 05:20:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 01:25:47.482 05:20:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:47.482 05:20:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 01:25:47.482 [2024-12-09 05:20:38.859745] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 01:25:47.482 [2024-12-09 05:20:38.859830] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:25:47.482 [2024-12-09 05:20:38.859856] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 01:25:47.482 [2024-12-09 05:20:38.859874] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:25:47.482 [2024-12-09 05:20:38.862706] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:25:47.482 [2024-12-09 05:20:38.862789] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 01:25:47.482 BaseBdev2 01:25:47.482 05:20:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:47.482 05:20:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 01:25:47.482 05:20:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 01:25:47.482 05:20:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:47.482 05:20:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 01:25:47.482 BaseBdev3_malloc 01:25:47.482 05:20:38 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:47.482 05:20:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 01:25:47.482 05:20:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:47.482 05:20:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 01:25:47.482 true 01:25:47.482 05:20:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:47.482 05:20:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 01:25:47.482 05:20:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:47.482 05:20:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 01:25:47.482 [2024-12-09 05:20:38.927137] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 01:25:47.482 [2024-12-09 05:20:38.927201] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:25:47.482 [2024-12-09 05:20:38.927228] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 01:25:47.482 [2024-12-09 05:20:38.927246] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:25:47.482 [2024-12-09 05:20:38.930065] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:25:47.482 [2024-12-09 05:20:38.930127] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 01:25:47.482 BaseBdev3 01:25:47.482 05:20:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:47.482 05:20:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 01:25:47.482 05:20:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 01:25:47.482 05:20:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:47.482 05:20:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 01:25:47.482 BaseBdev4_malloc 01:25:47.482 05:20:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:47.482 05:20:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 01:25:47.482 05:20:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:47.482 05:20:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 01:25:47.482 true 01:25:47.482 05:20:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:47.482 05:20:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 01:25:47.482 05:20:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:47.482 05:20:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 01:25:47.482 [2024-12-09 05:20:38.984981] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 01:25:47.482 [2024-12-09 05:20:38.985046] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:25:47.482 [2024-12-09 05:20:38.985073] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 01:25:47.482 [2024-12-09 05:20:38.985091] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:25:47.482 [2024-12-09 05:20:38.987882] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:25:47.482 [2024-12-09 05:20:38.988080] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 01:25:47.482 BaseBdev4 01:25:47.482 05:20:38 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:47.482 05:20:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 01:25:47.482 05:20:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:47.482 05:20:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 01:25:47.482 [2024-12-09 05:20:38.997060] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 01:25:47.482 [2024-12-09 05:20:38.999461] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 01:25:47.482 [2024-12-09 05:20:38.999564] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 01:25:47.482 [2024-12-09 05:20:38.999658] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 01:25:47.482 [2024-12-09 05:20:38.999961] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 01:25:47.482 [2024-12-09 05:20:38.999983] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 01:25:47.482 [2024-12-09 05:20:39.000263] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 01:25:47.482 [2024-12-09 05:20:39.000498] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 01:25:47.482 [2024-12-09 05:20:39.000516] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 01:25:47.482 [2024-12-09 05:20:39.000708] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:25:47.482 05:20:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:47.482 05:20:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 01:25:47.482 05:20:39 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:25:47.482 05:20:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:25:47.482 05:20:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 01:25:47.482 05:20:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:25:47.482 05:20:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 01:25:47.482 05:20:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:25:47.483 05:20:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:25:47.483 05:20:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:25:47.483 05:20:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:25:47.483 05:20:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:25:47.483 05:20:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:47.483 05:20:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 01:25:47.483 05:20:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:25:47.483 05:20:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:47.483 05:20:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:25:47.483 "name": "raid_bdev1", 01:25:47.483 "uuid": "2fb802b8-4aef-436b-9d4d-653445086909", 01:25:47.483 "strip_size_kb": 64, 01:25:47.483 "state": "online", 01:25:47.483 "raid_level": "concat", 01:25:47.483 "superblock": true, 01:25:47.483 "num_base_bdevs": 4, 01:25:47.483 "num_base_bdevs_discovered": 4, 01:25:47.483 "num_base_bdevs_operational": 4, 01:25:47.483 "base_bdevs_list": [ 
01:25:47.483 { 01:25:47.483 "name": "BaseBdev1", 01:25:47.483 "uuid": "476d6a49-bbb1-5595-80de-424f678453c7", 01:25:47.483 "is_configured": true, 01:25:47.483 "data_offset": 2048, 01:25:47.483 "data_size": 63488 01:25:47.483 }, 01:25:47.483 { 01:25:47.483 "name": "BaseBdev2", 01:25:47.483 "uuid": "158100a4-e06e-521e-8738-d7a45d982883", 01:25:47.483 "is_configured": true, 01:25:47.483 "data_offset": 2048, 01:25:47.483 "data_size": 63488 01:25:47.483 }, 01:25:47.483 { 01:25:47.483 "name": "BaseBdev3", 01:25:47.483 "uuid": "0b016a2e-2687-56f6-9f52-bab33fbbf55b", 01:25:47.483 "is_configured": true, 01:25:47.483 "data_offset": 2048, 01:25:47.483 "data_size": 63488 01:25:47.483 }, 01:25:47.483 { 01:25:47.483 "name": "BaseBdev4", 01:25:47.483 "uuid": "f6655464-d3e3-54e0-8a5b-bc40ba030190", 01:25:47.483 "is_configured": true, 01:25:47.483 "data_offset": 2048, 01:25:47.483 "data_size": 63488 01:25:47.483 } 01:25:47.483 ] 01:25:47.483 }' 01:25:47.483 05:20:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:25:47.483 05:20:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 01:25:48.081 05:20:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 01:25:48.081 05:20:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 01:25:48.081 [2024-12-09 05:20:39.694859] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 01:25:49.014 05:20:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 01:25:49.014 05:20:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:49.014 05:20:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 01:25:49.014 05:20:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:49.014 05:20:40 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 01:25:49.014 05:20:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 01:25:49.014 05:20:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 01:25:49.014 05:20:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 01:25:49.014 05:20:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:25:49.014 05:20:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:25:49.014 05:20:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 01:25:49.014 05:20:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:25:49.014 05:20:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 01:25:49.014 05:20:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:25:49.014 05:20:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:25:49.014 05:20:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:25:49.014 05:20:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:25:49.014 05:20:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:25:49.014 05:20:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:25:49.014 05:20:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:49.014 05:20:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 01:25:49.014 05:20:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:49.272 05:20:40 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:25:49.272 "name": "raid_bdev1", 01:25:49.272 "uuid": "2fb802b8-4aef-436b-9d4d-653445086909", 01:25:49.272 "strip_size_kb": 64, 01:25:49.272 "state": "online", 01:25:49.272 "raid_level": "concat", 01:25:49.272 "superblock": true, 01:25:49.272 "num_base_bdevs": 4, 01:25:49.272 "num_base_bdevs_discovered": 4, 01:25:49.272 "num_base_bdevs_operational": 4, 01:25:49.272 "base_bdevs_list": [ 01:25:49.272 { 01:25:49.272 "name": "BaseBdev1", 01:25:49.272 "uuid": "476d6a49-bbb1-5595-80de-424f678453c7", 01:25:49.272 "is_configured": true, 01:25:49.272 "data_offset": 2048, 01:25:49.272 "data_size": 63488 01:25:49.272 }, 01:25:49.272 { 01:25:49.272 "name": "BaseBdev2", 01:25:49.272 "uuid": "158100a4-e06e-521e-8738-d7a45d982883", 01:25:49.272 "is_configured": true, 01:25:49.272 "data_offset": 2048, 01:25:49.272 "data_size": 63488 01:25:49.272 }, 01:25:49.272 { 01:25:49.272 "name": "BaseBdev3", 01:25:49.272 "uuid": "0b016a2e-2687-56f6-9f52-bab33fbbf55b", 01:25:49.272 "is_configured": true, 01:25:49.272 "data_offset": 2048, 01:25:49.272 "data_size": 63488 01:25:49.272 }, 01:25:49.272 { 01:25:49.272 "name": "BaseBdev4", 01:25:49.272 "uuid": "f6655464-d3e3-54e0-8a5b-bc40ba030190", 01:25:49.272 "is_configured": true, 01:25:49.272 "data_offset": 2048, 01:25:49.272 "data_size": 63488 01:25:49.272 } 01:25:49.272 ] 01:25:49.272 }' 01:25:49.272 05:20:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:25:49.272 05:20:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 01:25:49.529 05:20:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 01:25:49.529 05:20:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:49.529 05:20:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 01:25:49.529 [2024-12-09 05:20:41.118607] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 01:25:49.529 [2024-12-09 05:20:41.118809] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 01:25:49.530 [2024-12-09 05:20:41.122551] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 01:25:49.530 [2024-12-09 05:20:41.122873] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:25:49.530 { 01:25:49.530 "results": [ 01:25:49.530 { 01:25:49.530 "job": "raid_bdev1", 01:25:49.530 "core_mask": "0x1", 01:25:49.530 "workload": "randrw", 01:25:49.530 "percentage": 50, 01:25:49.530 "status": "finished", 01:25:49.530 "queue_depth": 1, 01:25:49.530 "io_size": 131072, 01:25:49.530 "runtime": 1.421236, 01:25:49.530 "iops": 10116.546442673842, 01:25:49.530 "mibps": 1264.5683053342302, 01:25:49.530 "io_failed": 1, 01:25:49.530 "io_timeout": 0, 01:25:49.530 "avg_latency_us": 138.2833682959366, 01:25:49.530 "min_latency_us": 37.46909090909091, 01:25:49.530 "max_latency_us": 1854.370909090909 01:25:49.530 } 01:25:49.530 ], 01:25:49.530 "core_count": 1 01:25:49.530 } 01:25:49.530 [2024-12-09 05:20:41.123055] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 01:25:49.530 [2024-12-09 05:20:41.123091] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 01:25:49.530 05:20:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:49.530 05:20:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 72965 01:25:49.530 05:20:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 72965 ']' 01:25:49.530 05:20:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 72965 01:25:49.530 05:20:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 01:25:49.530 05:20:41 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:25:49.530 05:20:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72965 01:25:49.786 05:20:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:25:49.786 05:20:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:25:49.786 05:20:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72965' 01:25:49.786 killing process with pid 72965 01:25:49.786 05:20:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 72965 01:25:49.786 [2024-12-09 05:20:41.167048] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 01:25:49.786 05:20:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 72965 01:25:50.044 [2024-12-09 05:20:41.452785] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 01:25:51.443 05:20:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.8KoTT01x6W 01:25:51.443 05:20:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 01:25:51.443 05:20:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 01:25:51.443 05:20:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.70 01:25:51.443 05:20:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 01:25:51.443 ************************************ 01:25:51.443 END TEST raid_read_error_test 01:25:51.443 ************************************ 01:25:51.443 05:20:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 01:25:51.443 05:20:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 01:25:51.443 05:20:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.70 != \0\.\0\0 ]] 01:25:51.443 01:25:51.443 real 0m4.993s 
01:25:51.443 user 0m6.129s 01:25:51.443 sys 0m0.657s 01:25:51.443 05:20:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 01:25:51.443 05:20:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 01:25:51.443 05:20:42 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 01:25:51.443 05:20:42 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 01:25:51.443 05:20:42 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 01:25:51.443 05:20:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 01:25:51.443 ************************************ 01:25:51.443 START TEST raid_write_error_test 01:25:51.443 ************************************ 01:25:51.443 05:20:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 write 01:25:51.443 05:20:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 01:25:51.443 05:20:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 01:25:51.443 05:20:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 01:25:51.443 05:20:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 01:25:51.443 05:20:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 01:25:51.443 05:20:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 01:25:51.443 05:20:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 01:25:51.443 05:20:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 01:25:51.443 05:20:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 01:25:51.443 05:20:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 01:25:51.443 05:20:42 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 01:25:51.443 05:20:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 01:25:51.443 05:20:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 01:25:51.443 05:20:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 01:25:51.443 05:20:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 01:25:51.443 05:20:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 01:25:51.443 05:20:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 01:25:51.443 05:20:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 01:25:51.443 05:20:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 01:25:51.443 05:20:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 01:25:51.443 05:20:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 01:25:51.443 05:20:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 01:25:51.443 05:20:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 01:25:51.443 05:20:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 01:25:51.443 05:20:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 01:25:51.443 05:20:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 01:25:51.443 05:20:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 01:25:51.443 05:20:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 01:25:51.443 05:20:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.Q4CNHdVBZL 01:25:51.443 Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:25:51.443 05:20:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73105 01:25:51.443 05:20:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 01:25:51.443 05:20:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73105 01:25:51.443 05:20:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 73105 ']' 01:25:51.443 05:20:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:25:51.443 05:20:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 01:25:51.443 05:20:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:25:51.443 05:20:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 01:25:51.443 05:20:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 01:25:51.443 [2024-12-09 05:20:42.832474] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
01:25:51.443 [2024-12-09 05:20:42.832843] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73105 ] 01:25:51.443 [2024-12-09 05:20:43.011779] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:25:51.701 [2024-12-09 05:20:43.146636] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:25:51.959 [2024-12-09 05:20:43.344733] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 01:25:51.959 [2024-12-09 05:20:43.345130] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 01:25:52.217 05:20:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:25:52.217 05:20:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 01:25:52.217 05:20:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 01:25:52.217 05:20:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 01:25:52.217 05:20:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:52.217 05:20:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 01:25:52.217 BaseBdev1_malloc 01:25:52.217 05:20:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:52.217 05:20:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 01:25:52.217 05:20:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:52.217 05:20:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 01:25:52.218 true 01:25:52.218 05:20:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 01:25:52.218 05:20:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 01:25:52.218 05:20:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:52.218 05:20:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 01:25:52.218 [2024-12-09 05:20:43.812141] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 01:25:52.218 [2024-12-09 05:20:43.812245] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:25:52.218 [2024-12-09 05:20:43.812284] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 01:25:52.218 [2024-12-09 05:20:43.812308] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:25:52.218 [2024-12-09 05:20:43.816165] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:25:52.218 [2024-12-09 05:20:43.816511] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 01:25:52.218 BaseBdev1 01:25:52.218 05:20:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:52.218 05:20:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 01:25:52.218 05:20:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 01:25:52.218 05:20:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:52.218 05:20:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 01:25:52.477 BaseBdev2_malloc 01:25:52.477 05:20:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:52.477 05:20:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 01:25:52.477 05:20:43 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:52.477 05:20:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 01:25:52.477 true 01:25:52.477 05:20:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:52.477 05:20:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 01:25:52.477 05:20:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:52.477 05:20:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 01:25:52.477 [2024-12-09 05:20:43.894752] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 01:25:52.477 [2024-12-09 05:20:43.894892] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:25:52.477 [2024-12-09 05:20:43.894925] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 01:25:52.477 [2024-12-09 05:20:43.894948] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:25:52.477 [2024-12-09 05:20:43.898852] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:25:52.477 [2024-12-09 05:20:43.899188] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 01:25:52.477 BaseBdev2 01:25:52.477 05:20:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:52.477 05:20:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 01:25:52.478 05:20:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 01:25:52.478 05:20:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:52.478 05:20:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
01:25:52.478 BaseBdev3_malloc 01:25:52.478 05:20:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:52.478 05:20:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 01:25:52.478 05:20:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:52.478 05:20:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 01:25:52.478 true 01:25:52.478 05:20:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:52.478 05:20:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 01:25:52.478 05:20:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:52.478 05:20:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 01:25:52.478 [2024-12-09 05:20:43.983463] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 01:25:52.478 [2024-12-09 05:20:43.983762] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:25:52.478 [2024-12-09 05:20:43.983810] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 01:25:52.478 [2024-12-09 05:20:43.983836] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:25:52.478 [2024-12-09 05:20:43.987719] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:25:52.478 [2024-12-09 05:20:43.987778] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 01:25:52.478 BaseBdev3 01:25:52.478 05:20:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:52.478 05:20:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 01:25:52.478 05:20:43 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 01:25:52.478 05:20:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:52.478 05:20:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 01:25:52.478 BaseBdev4_malloc 01:25:52.478 05:20:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:52.478 05:20:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 01:25:52.478 05:20:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:52.478 05:20:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 01:25:52.478 true 01:25:52.478 05:20:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:52.478 05:20:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 01:25:52.478 05:20:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:52.478 05:20:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 01:25:52.478 [2024-12-09 05:20:44.060171] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 01:25:52.478 [2024-12-09 05:20:44.060262] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:25:52.478 [2024-12-09 05:20:44.060300] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 01:25:52.478 [2024-12-09 05:20:44.060326] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:25:52.478 [2024-12-09 05:20:44.063424] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:25:52.478 [2024-12-09 05:20:44.063472] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 01:25:52.478 BaseBdev4 
01:25:52.478 05:20:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:52.478 05:20:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 01:25:52.478 05:20:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:52.478 05:20:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 01:25:52.478 [2024-12-09 05:20:44.072277] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 01:25:52.478 [2024-12-09 05:20:44.074816] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 01:25:52.478 [2024-12-09 05:20:44.075126] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 01:25:52.478 [2024-12-09 05:20:44.075238] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 01:25:52.478 [2024-12-09 05:20:44.075594] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 01:25:52.478 [2024-12-09 05:20:44.075618] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 01:25:52.478 [2024-12-09 05:20:44.075908] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 01:25:52.478 [2024-12-09 05:20:44.076112] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 01:25:52.478 [2024-12-09 05:20:44.076131] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 01:25:52.478 [2024-12-09 05:20:44.076373] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:25:52.478 05:20:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:52.478 05:20:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online concat 64 4 01:25:52.478 05:20:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:25:52.478 05:20:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:25:52.478 05:20:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 01:25:52.478 05:20:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:25:52.478 05:20:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 01:25:52.478 05:20:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:25:52.478 05:20:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:25:52.478 05:20:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:25:52.478 05:20:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:25:52.478 05:20:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:25:52.478 05:20:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:25:52.478 05:20:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:52.478 05:20:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 01:25:52.737 05:20:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:52.737 05:20:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:25:52.737 "name": "raid_bdev1", 01:25:52.737 "uuid": "cf8deeab-8194-4716-a78c-ef16e1129c39", 01:25:52.737 "strip_size_kb": 64, 01:25:52.737 "state": "online", 01:25:52.737 "raid_level": "concat", 01:25:52.737 "superblock": true, 01:25:52.737 "num_base_bdevs": 4, 01:25:52.737 "num_base_bdevs_discovered": 4, 01:25:52.737 
"num_base_bdevs_operational": 4, 01:25:52.737 "base_bdevs_list": [ 01:25:52.737 { 01:25:52.737 "name": "BaseBdev1", 01:25:52.737 "uuid": "551b8528-2442-5522-9755-439621f9f604", 01:25:52.737 "is_configured": true, 01:25:52.737 "data_offset": 2048, 01:25:52.737 "data_size": 63488 01:25:52.737 }, 01:25:52.737 { 01:25:52.737 "name": "BaseBdev2", 01:25:52.737 "uuid": "965cb00a-3e51-5e1d-8a28-96a12455167a", 01:25:52.737 "is_configured": true, 01:25:52.737 "data_offset": 2048, 01:25:52.737 "data_size": 63488 01:25:52.737 }, 01:25:52.737 { 01:25:52.737 "name": "BaseBdev3", 01:25:52.737 "uuid": "d4e01a2f-9eba-530b-a633-bc86beeaf711", 01:25:52.737 "is_configured": true, 01:25:52.737 "data_offset": 2048, 01:25:52.737 "data_size": 63488 01:25:52.737 }, 01:25:52.737 { 01:25:52.737 "name": "BaseBdev4", 01:25:52.737 "uuid": "5628af83-6787-505f-8319-7e0b0db9e5fb", 01:25:52.737 "is_configured": true, 01:25:52.737 "data_offset": 2048, 01:25:52.737 "data_size": 63488 01:25:52.737 } 01:25:52.737 ] 01:25:52.737 }' 01:25:52.737 05:20:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:25:52.737 05:20:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 01:25:53.304 05:20:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 01:25:53.304 05:20:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 01:25:53.304 [2024-12-09 05:20:44.746091] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 01:25:54.263 05:20:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 01:25:54.263 05:20:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:54.263 05:20:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 01:25:54.263 05:20:45 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:54.263 05:20:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 01:25:54.263 05:20:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 01:25:54.263 05:20:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 01:25:54.263 05:20:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 01:25:54.263 05:20:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:25:54.263 05:20:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:25:54.263 05:20:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 01:25:54.263 05:20:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:25:54.263 05:20:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 01:25:54.263 05:20:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:25:54.263 05:20:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:25:54.263 05:20:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:25:54.263 05:20:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:25:54.263 05:20:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:25:54.263 05:20:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:25:54.263 05:20:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:54.263 05:20:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 01:25:54.263 05:20:45 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:54.263 05:20:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:25:54.263 "name": "raid_bdev1", 01:25:54.263 "uuid": "cf8deeab-8194-4716-a78c-ef16e1129c39", 01:25:54.263 "strip_size_kb": 64, 01:25:54.263 "state": "online", 01:25:54.263 "raid_level": "concat", 01:25:54.263 "superblock": true, 01:25:54.263 "num_base_bdevs": 4, 01:25:54.263 "num_base_bdevs_discovered": 4, 01:25:54.263 "num_base_bdevs_operational": 4, 01:25:54.263 "base_bdevs_list": [ 01:25:54.263 { 01:25:54.263 "name": "BaseBdev1", 01:25:54.263 "uuid": "551b8528-2442-5522-9755-439621f9f604", 01:25:54.263 "is_configured": true, 01:25:54.263 "data_offset": 2048, 01:25:54.263 "data_size": 63488 01:25:54.263 }, 01:25:54.263 { 01:25:54.263 "name": "BaseBdev2", 01:25:54.263 "uuid": "965cb00a-3e51-5e1d-8a28-96a12455167a", 01:25:54.263 "is_configured": true, 01:25:54.263 "data_offset": 2048, 01:25:54.263 "data_size": 63488 01:25:54.263 }, 01:25:54.263 { 01:25:54.263 "name": "BaseBdev3", 01:25:54.263 "uuid": "d4e01a2f-9eba-530b-a633-bc86beeaf711", 01:25:54.263 "is_configured": true, 01:25:54.263 "data_offset": 2048, 01:25:54.263 "data_size": 63488 01:25:54.263 }, 01:25:54.263 { 01:25:54.263 "name": "BaseBdev4", 01:25:54.263 "uuid": "5628af83-6787-505f-8319-7e0b0db9e5fb", 01:25:54.263 "is_configured": true, 01:25:54.263 "data_offset": 2048, 01:25:54.263 "data_size": 63488 01:25:54.263 } 01:25:54.263 ] 01:25:54.263 }' 01:25:54.263 05:20:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:25:54.263 05:20:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 01:25:54.829 05:20:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 01:25:54.829 05:20:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:54.829 05:20:46 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 01:25:54.829 [2024-12-09 05:20:46.157157] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 01:25:54.829 [2024-12-09 05:20:46.157198] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 01:25:54.829 [2024-12-09 05:20:46.160891] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 01:25:54.829 [2024-12-09 05:20:46.160970] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:25:54.829 [2024-12-09 05:20:46.161033] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 01:25:54.829 [2024-12-09 05:20:46.161055] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 01:25:54.829 { 01:25:54.829 "results": [ 01:25:54.829 { 01:25:54.829 "job": "raid_bdev1", 01:25:54.829 "core_mask": "0x1", 01:25:54.829 "workload": "randrw", 01:25:54.829 "percentage": 50, 01:25:54.829 "status": "finished", 01:25:54.829 "queue_depth": 1, 01:25:54.829 "io_size": 131072, 01:25:54.829 "runtime": 1.408249, 01:25:54.829 "iops": 9758.217474324498, 01:25:54.829 "mibps": 1219.7771842905622, 01:25:54.829 "io_failed": 1, 01:25:54.829 "io_timeout": 0, 01:25:54.829 "avg_latency_us": 144.21066619039112, 01:25:54.829 "min_latency_us": 38.86545454545455, 01:25:54.829 "max_latency_us": 1936.290909090909 01:25:54.829 } 01:25:54.829 ], 01:25:54.829 "core_count": 1 01:25:54.829 } 01:25:54.829 05:20:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:54.829 05:20:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73105 01:25:54.829 05:20:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 73105 ']' 01:25:54.829 05:20:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 73105 01:25:54.829 05:20:46 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@959 -- # uname 01:25:54.829 05:20:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:25:54.829 05:20:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73105 01:25:54.829 killing process with pid 73105 01:25:54.829 05:20:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:25:54.829 05:20:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:25:54.829 05:20:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73105' 01:25:54.829 05:20:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 73105 01:25:54.829 [2024-12-09 05:20:46.198490] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 01:25:54.829 05:20:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 73105 01:25:55.087 [2024-12-09 05:20:46.479420] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 01:25:56.460 05:20:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.Q4CNHdVBZL 01:25:56.460 05:20:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 01:25:56.460 05:20:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 01:25:56.460 05:20:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 01:25:56.460 05:20:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 01:25:56.460 05:20:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 01:25:56.460 05:20:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 01:25:56.460 05:20:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 01:25:56.460 01:25:56.460 real 0m4.970s 01:25:56.460 user 0m6.022s 
01:25:56.460 sys 0m0.635s 01:25:56.460 ************************************ 01:25:56.460 END TEST raid_write_error_test 01:25:56.460 ************************************ 01:25:56.460 05:20:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 01:25:56.460 05:20:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 01:25:56.460 05:20:47 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 01:25:56.461 05:20:47 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 01:25:56.461 05:20:47 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 01:25:56.461 05:20:47 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 01:25:56.461 05:20:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 01:25:56.461 ************************************ 01:25:56.461 START TEST raid_state_function_test 01:25:56.461 ************************************ 01:25:56.461 05:20:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 false 01:25:56.461 05:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 01:25:56.461 05:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 01:25:56.461 05:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 01:25:56.461 05:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 01:25:56.461 05:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 01:25:56.461 05:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 01:25:56.461 05:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 01:25:56.461 05:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 01:25:56.461 
05:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 01:25:56.461 05:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 01:25:56.461 05:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 01:25:56.461 05:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 01:25:56.461 05:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 01:25:56.461 05:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 01:25:56.461 05:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 01:25:56.461 05:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 01:25:56.461 05:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 01:25:56.461 05:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 01:25:56.461 05:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 01:25:56.461 05:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 01:25:56.461 05:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 01:25:56.461 05:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 01:25:56.461 05:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 01:25:56.461 05:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 01:25:56.461 05:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 01:25:56.461 05:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 01:25:56.461 05:20:47 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 01:25:56.461 05:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 01:25:56.461 05:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=73254 01:25:56.461 05:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 01:25:56.461 Process raid pid: 73254 01:25:56.461 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:25:56.461 05:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73254' 01:25:56.461 05:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 73254 01:25:56.461 05:20:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 73254 ']' 01:25:56.461 05:20:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:25:56.461 05:20:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 01:25:56.461 05:20:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:25:56.461 05:20:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 01:25:56.461 05:20:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:25:56.461 [2024-12-09 05:20:47.858423] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
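[Editor's note, not part of the log] The `waitforlisten 73254` step above blocks until the freshly launched `bdev_svc` process accepts connections on `/var/tmp/spdk.sock`. The sketch below is a minimal, hypothetical Python rendering of that poll loop — it is not SPDK code, and the socket path and timings are made up for the demo (a throwaway listener stands in for `bdev_svc`):

```python
import os
import socket
import tempfile
import threading
import time

def wait_for_listen(sock_path, timeout=5.0, interval=0.05):
    """Poll a UNIX domain socket path until connect() succeeds or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
                s.connect(sock_path)
                return True
        except (FileNotFoundError, ConnectionRefusedError):
            time.sleep(interval)  # socket not bound yet; retry
    return False

# Demo: a stand-in "server" binds the socket ~0.2 s after polling starts.
sock_path = os.path.join(tempfile.mkdtemp(), "spdk.sock")

def serve():
    time.sleep(0.2)
    srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    srv.bind(sock_path)
    srv.listen(1)
    conn, _ = srv.accept()
    conn.close()
    srv.close()

threading.Thread(target=serve, daemon=True).start()
ready = wait_for_listen(sock_path)
print(ready)
```

In the real harness the equivalent loop lives in `autotest_common.sh` (shell, with `max_retries=100`), and the process being waited on is the `bdev_svc` app started with `-i 0 -L bdev_raid`.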
01:25:56.461 [2024-12-09 05:20:47.858609] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:25:56.461 [2024-12-09 05:20:48.039350] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:25:56.718 [2024-12-09 05:20:48.184325] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:25:56.975 [2024-12-09 05:20:48.457200] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 01:25:56.975 [2024-12-09 05:20:48.457249] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 01:25:57.233 05:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:25:57.233 05:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 01:25:57.233 05:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 01:25:57.233 05:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:57.233 05:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:25:57.233 [2024-12-09 05:20:48.833042] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 01:25:57.233 [2024-12-09 05:20:48.833142] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 01:25:57.233 [2024-12-09 05:20:48.833160] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 01:25:57.233 [2024-12-09 05:20:48.833176] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 01:25:57.233 [2024-12-09 05:20:48.833186] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 01:25:57.233 [2024-12-09 05:20:48.833201] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 01:25:57.233 [2024-12-09 05:20:48.833210] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 01:25:57.233 [2024-12-09 05:20:48.833225] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 01:25:57.233 05:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:57.233 05:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 01:25:57.233 05:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:25:57.233 05:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:25:57.233 05:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:25:57.233 05:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:25:57.233 05:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 01:25:57.233 05:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:25:57.233 05:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:25:57.233 05:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:25:57.233 05:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:25:57.233 05:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:25:57.233 05:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:57.233 05:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
01:25:57.233 05:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:25:57.489 05:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:57.489 05:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:25:57.489 "name": "Existed_Raid", 01:25:57.489 "uuid": "00000000-0000-0000-0000-000000000000", 01:25:57.489 "strip_size_kb": 0, 01:25:57.489 "state": "configuring", 01:25:57.489 "raid_level": "raid1", 01:25:57.489 "superblock": false, 01:25:57.489 "num_base_bdevs": 4, 01:25:57.489 "num_base_bdevs_discovered": 0, 01:25:57.489 "num_base_bdevs_operational": 4, 01:25:57.489 "base_bdevs_list": [ 01:25:57.489 { 01:25:57.489 "name": "BaseBdev1", 01:25:57.489 "uuid": "00000000-0000-0000-0000-000000000000", 01:25:57.489 "is_configured": false, 01:25:57.489 "data_offset": 0, 01:25:57.489 "data_size": 0 01:25:57.489 }, 01:25:57.489 { 01:25:57.489 "name": "BaseBdev2", 01:25:57.489 "uuid": "00000000-0000-0000-0000-000000000000", 01:25:57.489 "is_configured": false, 01:25:57.489 "data_offset": 0, 01:25:57.489 "data_size": 0 01:25:57.489 }, 01:25:57.489 { 01:25:57.489 "name": "BaseBdev3", 01:25:57.489 "uuid": "00000000-0000-0000-0000-000000000000", 01:25:57.489 "is_configured": false, 01:25:57.489 "data_offset": 0, 01:25:57.489 "data_size": 0 01:25:57.489 }, 01:25:57.489 { 01:25:57.489 "name": "BaseBdev4", 01:25:57.489 "uuid": "00000000-0000-0000-0000-000000000000", 01:25:57.489 "is_configured": false, 01:25:57.489 "data_offset": 0, 01:25:57.489 "data_size": 0 01:25:57.489 } 01:25:57.489 ] 01:25:57.489 }' 01:25:57.489 05:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:25:57.489 05:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:25:57.751 05:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 
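[Editor's note, not part of the log] The `verify_raid_bdev_state Existed_Raid configuring raid1 0 4` call above selects the raid's entry from `bdev_raid_get_bdevs` output with `jq -r '.[] | select(.name == "Existed_Raid")'` and compares its fields against the expected values. A minimal Python sketch of that check, using an abridged copy of the JSON dumped in the log (two base bdevs instead of four; the function name mirrors the shell helper but this is not SPDK code):

```python
import json

# Abridged raid_bdev_info, copied from the log dump above.
raid_bdev_info = json.loads("""
{
  "name": "Existed_Raid",
  "strip_size_kb": 0,
  "state": "configuring",
  "raid_level": "raid1",
  "superblock": false,
  "num_base_bdevs": 4,
  "num_base_bdevs_discovered": 0,
  "num_base_bdevs_operational": 4,
  "base_bdevs_list": [
    {"name": "BaseBdev1", "is_configured": false, "data_offset": 0, "data_size": 0},
    {"name": "BaseBdev2", "is_configured": false, "data_offset": 0, "data_size": 0}
  ]
}
""")

def verify_raid_bdev_state(info, expected_state, raid_level, strip_size, num_operational):
    """Mirror the shell helper's checks: state, level, strip size, operational
    count, and that the discovered count matches the configured base bdevs."""
    discovered = sum(1 for b in info["base_bdevs_list"] if b["is_configured"])
    return (info["state"] == expected_state
            and info["raid_level"] == raid_level
            and info["strip_size_kb"] == strip_size
            and info["num_base_bdevs_operational"] == num_operational
            and info["num_base_bdevs_discovered"] == discovered)

print(verify_raid_bdev_state(raid_bdev_info, "configuring", "raid1", 0, 4))
```

The raid stays in `configuring` (rather than `online`) because none of its four base bdevs exist yet; the log's later iterations show `num_base_bdevs_discovered` climbing from 0 to 3 as each malloc bdev is created and claimed.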
01:25:57.751 05:20:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:57.751 05:20:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:25:57.751 [2024-12-09 05:20:49.357199] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 01:25:57.751 [2024-12-09 05:20:49.357278] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 01:25:57.751 05:20:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:57.751 05:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 01:25:58.008 05:20:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:58.008 05:20:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:25:58.008 [2024-12-09 05:20:49.369148] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 01:25:58.008 [2024-12-09 05:20:49.369429] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 01:25:58.008 [2024-12-09 05:20:49.369470] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 01:25:58.008 [2024-12-09 05:20:49.369512] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 01:25:58.008 [2024-12-09 05:20:49.369525] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 01:25:58.008 [2024-12-09 05:20:49.369544] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 01:25:58.008 [2024-12-09 05:20:49.369554] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 01:25:58.008 [2024-12-09 05:20:49.369571] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: 
*DEBUG*: base bdev BaseBdev4 doesn't exist now 01:25:58.008 05:20:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:58.008 05:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 01:25:58.008 05:20:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:58.008 05:20:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:25:58.008 [2024-12-09 05:20:49.426197] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 01:25:58.008 BaseBdev1 01:25:58.008 05:20:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:58.008 05:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 01:25:58.008 05:20:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 01:25:58.008 05:20:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 01:25:58.008 05:20:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 01:25:58.008 05:20:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 01:25:58.008 05:20:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 01:25:58.008 05:20:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 01:25:58.008 05:20:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:58.008 05:20:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:25:58.008 05:20:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:58.008 05:20:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 
-t 2000 01:25:58.008 05:20:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:58.008 05:20:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:25:58.008 [ 01:25:58.008 { 01:25:58.008 "name": "BaseBdev1", 01:25:58.008 "aliases": [ 01:25:58.008 "e4e11f41-6625-4e97-9b82-09cbc7b7aa10" 01:25:58.008 ], 01:25:58.008 "product_name": "Malloc disk", 01:25:58.008 "block_size": 512, 01:25:58.008 "num_blocks": 65536, 01:25:58.008 "uuid": "e4e11f41-6625-4e97-9b82-09cbc7b7aa10", 01:25:58.008 "assigned_rate_limits": { 01:25:58.008 "rw_ios_per_sec": 0, 01:25:58.008 "rw_mbytes_per_sec": 0, 01:25:58.008 "r_mbytes_per_sec": 0, 01:25:58.008 "w_mbytes_per_sec": 0 01:25:58.008 }, 01:25:58.008 "claimed": true, 01:25:58.008 "claim_type": "exclusive_write", 01:25:58.008 "zoned": false, 01:25:58.008 "supported_io_types": { 01:25:58.008 "read": true, 01:25:58.008 "write": true, 01:25:58.008 "unmap": true, 01:25:58.008 "flush": true, 01:25:58.008 "reset": true, 01:25:58.008 "nvme_admin": false, 01:25:58.008 "nvme_io": false, 01:25:58.008 "nvme_io_md": false, 01:25:58.008 "write_zeroes": true, 01:25:58.008 "zcopy": true, 01:25:58.008 "get_zone_info": false, 01:25:58.008 "zone_management": false, 01:25:58.008 "zone_append": false, 01:25:58.008 "compare": false, 01:25:58.008 "compare_and_write": false, 01:25:58.008 "abort": true, 01:25:58.008 "seek_hole": false, 01:25:58.008 "seek_data": false, 01:25:58.008 "copy": true, 01:25:58.008 "nvme_iov_md": false 01:25:58.008 }, 01:25:58.008 "memory_domains": [ 01:25:58.008 { 01:25:58.008 "dma_device_id": "system", 01:25:58.008 "dma_device_type": 1 01:25:58.008 }, 01:25:58.008 { 01:25:58.008 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:25:58.008 "dma_device_type": 2 01:25:58.008 } 01:25:58.008 ], 01:25:58.008 "driver_specific": {} 01:25:58.008 } 01:25:58.008 ] 01:25:58.008 05:20:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
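[Editor's note, not part of the log] The `bdev_get_bdevs -b BaseBdev1 -t 2000` dump above describes the bdev created by `bdev_malloc_create 32 512 -b BaseBdev1`: a 32 MiB malloc disk with 512-byte blocks, already claimed `exclusive_write` by the raid. A small Python sketch (not SPDK code) checking that the reported geometry multiplies out to the requested size:

```python
# Fields copied from the bdev_get_bdevs dump above for BaseBdev1.
bdev = {
    "name": "BaseBdev1",
    "block_size": 512,
    "num_blocks": 65536,
    "claimed": True,
    "claim_type": "exclusive_write",
}

# bdev_malloc_create 32 512 requests a 32 MiB disk with 512-byte blocks,
# so block_size * num_blocks should come to 32 MiB (512 * 65536 bytes).
size_mib = bdev["block_size"] * bdev["num_blocks"] // (1024 * 1024)
print(size_mib)
print(bdev["claimed"] and bdev["claim_type"] == "exclusive_write")
```

The `exclusive_write` claim is what flips `is_configured` to `true` for BaseBdev1 in the next `verify_raid_bdev_state` dump, raising `num_base_bdevs_discovered` to 1.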
01:25:58.008 05:20:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 01:25:58.008 05:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 01:25:58.008 05:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:25:58.008 05:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:25:58.008 05:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:25:58.008 05:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:25:58.008 05:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 01:25:58.008 05:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:25:58.008 05:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:25:58.008 05:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:25:58.008 05:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:25:58.008 05:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:25:58.008 05:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:25:58.008 05:20:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:58.008 05:20:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:25:58.009 05:20:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:58.009 05:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:25:58.009 "name": "Existed_Raid", 01:25:58.009 "uuid": 
"00000000-0000-0000-0000-000000000000", 01:25:58.009 "strip_size_kb": 0, 01:25:58.009 "state": "configuring", 01:25:58.009 "raid_level": "raid1", 01:25:58.009 "superblock": false, 01:25:58.009 "num_base_bdevs": 4, 01:25:58.009 "num_base_bdevs_discovered": 1, 01:25:58.009 "num_base_bdevs_operational": 4, 01:25:58.009 "base_bdevs_list": [ 01:25:58.009 { 01:25:58.009 "name": "BaseBdev1", 01:25:58.009 "uuid": "e4e11f41-6625-4e97-9b82-09cbc7b7aa10", 01:25:58.009 "is_configured": true, 01:25:58.009 "data_offset": 0, 01:25:58.009 "data_size": 65536 01:25:58.009 }, 01:25:58.009 { 01:25:58.009 "name": "BaseBdev2", 01:25:58.009 "uuid": "00000000-0000-0000-0000-000000000000", 01:25:58.009 "is_configured": false, 01:25:58.009 "data_offset": 0, 01:25:58.009 "data_size": 0 01:25:58.009 }, 01:25:58.009 { 01:25:58.009 "name": "BaseBdev3", 01:25:58.009 "uuid": "00000000-0000-0000-0000-000000000000", 01:25:58.009 "is_configured": false, 01:25:58.009 "data_offset": 0, 01:25:58.009 "data_size": 0 01:25:58.009 }, 01:25:58.009 { 01:25:58.009 "name": "BaseBdev4", 01:25:58.009 "uuid": "00000000-0000-0000-0000-000000000000", 01:25:58.009 "is_configured": false, 01:25:58.009 "data_offset": 0, 01:25:58.009 "data_size": 0 01:25:58.009 } 01:25:58.009 ] 01:25:58.009 }' 01:25:58.009 05:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:25:58.009 05:20:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:25:58.574 05:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 01:25:58.574 05:20:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:58.574 05:20:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:25:58.574 [2024-12-09 05:20:49.986443] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 01:25:58.574 [2024-12-09 05:20:49.986781] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 01:25:58.574 05:20:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:58.574 05:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 01:25:58.574 05:20:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:58.574 05:20:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:25:58.574 [2024-12-09 05:20:49.994437] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 01:25:58.574 [2024-12-09 05:20:49.997237] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 01:25:58.574 [2024-12-09 05:20:49.997290] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 01:25:58.574 [2024-12-09 05:20:49.997307] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 01:25:58.574 [2024-12-09 05:20:49.997323] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 01:25:58.574 [2024-12-09 05:20:49.997333] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 01:25:58.574 [2024-12-09 05:20:49.997346] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 01:25:58.574 05:20:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:58.574 05:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 01:25:58.574 05:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 01:25:58.574 05:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 01:25:58.574 05:20:49 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:25:58.574 05:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:25:58.574 05:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:25:58.574 05:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:25:58.574 05:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 01:25:58.574 05:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:25:58.574 05:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:25:58.574 05:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:25:58.574 05:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:25:58.574 05:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:25:58.574 05:20:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:58.574 05:20:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:25:58.574 05:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:25:58.574 05:20:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:58.574 05:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:25:58.574 "name": "Existed_Raid", 01:25:58.574 "uuid": "00000000-0000-0000-0000-000000000000", 01:25:58.574 "strip_size_kb": 0, 01:25:58.574 "state": "configuring", 01:25:58.574 "raid_level": "raid1", 01:25:58.574 "superblock": false, 01:25:58.574 "num_base_bdevs": 4, 01:25:58.574 "num_base_bdevs_discovered": 1, 01:25:58.574 
"num_base_bdevs_operational": 4, 01:25:58.574 "base_bdevs_list": [ 01:25:58.574 { 01:25:58.574 "name": "BaseBdev1", 01:25:58.574 "uuid": "e4e11f41-6625-4e97-9b82-09cbc7b7aa10", 01:25:58.574 "is_configured": true, 01:25:58.574 "data_offset": 0, 01:25:58.574 "data_size": 65536 01:25:58.574 }, 01:25:58.574 { 01:25:58.574 "name": "BaseBdev2", 01:25:58.574 "uuid": "00000000-0000-0000-0000-000000000000", 01:25:58.574 "is_configured": false, 01:25:58.574 "data_offset": 0, 01:25:58.574 "data_size": 0 01:25:58.574 }, 01:25:58.574 { 01:25:58.574 "name": "BaseBdev3", 01:25:58.574 "uuid": "00000000-0000-0000-0000-000000000000", 01:25:58.574 "is_configured": false, 01:25:58.574 "data_offset": 0, 01:25:58.574 "data_size": 0 01:25:58.574 }, 01:25:58.574 { 01:25:58.574 "name": "BaseBdev4", 01:25:58.574 "uuid": "00000000-0000-0000-0000-000000000000", 01:25:58.574 "is_configured": false, 01:25:58.574 "data_offset": 0, 01:25:58.574 "data_size": 0 01:25:58.574 } 01:25:58.574 ] 01:25:58.574 }' 01:25:58.574 05:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:25:58.574 05:20:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:25:59.142 05:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 01:25:59.142 05:20:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:59.142 05:20:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:25:59.142 [2024-12-09 05:20:50.553198] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 01:25:59.142 BaseBdev2 01:25:59.142 05:20:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:59.142 05:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 01:25:59.142 05:20:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # 
local bdev_name=BaseBdev2 01:25:59.142 05:20:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 01:25:59.142 05:20:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 01:25:59.142 05:20:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 01:25:59.142 05:20:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 01:25:59.142 05:20:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 01:25:59.142 05:20:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:59.142 05:20:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:25:59.142 05:20:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:59.142 05:20:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 01:25:59.142 05:20:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:59.142 05:20:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:25:59.142 [ 01:25:59.142 { 01:25:59.142 "name": "BaseBdev2", 01:25:59.142 "aliases": [ 01:25:59.142 "f39dec9f-6797-4eb6-b117-46e3e125eade" 01:25:59.142 ], 01:25:59.142 "product_name": "Malloc disk", 01:25:59.142 "block_size": 512, 01:25:59.142 "num_blocks": 65536, 01:25:59.142 "uuid": "f39dec9f-6797-4eb6-b117-46e3e125eade", 01:25:59.142 "assigned_rate_limits": { 01:25:59.142 "rw_ios_per_sec": 0, 01:25:59.142 "rw_mbytes_per_sec": 0, 01:25:59.142 "r_mbytes_per_sec": 0, 01:25:59.142 "w_mbytes_per_sec": 0 01:25:59.142 }, 01:25:59.142 "claimed": true, 01:25:59.142 "claim_type": "exclusive_write", 01:25:59.142 "zoned": false, 01:25:59.142 "supported_io_types": { 01:25:59.142 "read": true, 01:25:59.142 "write": true, 01:25:59.142 
"unmap": true, 01:25:59.142 "flush": true, 01:25:59.142 "reset": true, 01:25:59.142 "nvme_admin": false, 01:25:59.142 "nvme_io": false, 01:25:59.142 "nvme_io_md": false, 01:25:59.142 "write_zeroes": true, 01:25:59.142 "zcopy": true, 01:25:59.142 "get_zone_info": false, 01:25:59.142 "zone_management": false, 01:25:59.142 "zone_append": false, 01:25:59.142 "compare": false, 01:25:59.142 "compare_and_write": false, 01:25:59.142 "abort": true, 01:25:59.142 "seek_hole": false, 01:25:59.142 "seek_data": false, 01:25:59.142 "copy": true, 01:25:59.142 "nvme_iov_md": false 01:25:59.142 }, 01:25:59.142 "memory_domains": [ 01:25:59.142 { 01:25:59.142 "dma_device_id": "system", 01:25:59.142 "dma_device_type": 1 01:25:59.142 }, 01:25:59.142 { 01:25:59.142 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:25:59.142 "dma_device_type": 2 01:25:59.142 } 01:25:59.142 ], 01:25:59.142 "driver_specific": {} 01:25:59.142 } 01:25:59.142 ] 01:25:59.142 05:20:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:59.142 05:20:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 01:25:59.142 05:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 01:25:59.142 05:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 01:25:59.142 05:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 01:25:59.142 05:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:25:59.142 05:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:25:59.142 05:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:25:59.142 05:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:25:59.142 05:20:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 01:25:59.142 05:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:25:59.142 05:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:25:59.142 05:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:25:59.142 05:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:25:59.142 05:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:25:59.142 05:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:25:59.142 05:20:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:59.142 05:20:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:25:59.142 05:20:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:59.142 05:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:25:59.142 "name": "Existed_Raid", 01:25:59.142 "uuid": "00000000-0000-0000-0000-000000000000", 01:25:59.142 "strip_size_kb": 0, 01:25:59.142 "state": "configuring", 01:25:59.142 "raid_level": "raid1", 01:25:59.142 "superblock": false, 01:25:59.142 "num_base_bdevs": 4, 01:25:59.142 "num_base_bdevs_discovered": 2, 01:25:59.142 "num_base_bdevs_operational": 4, 01:25:59.142 "base_bdevs_list": [ 01:25:59.142 { 01:25:59.142 "name": "BaseBdev1", 01:25:59.142 "uuid": "e4e11f41-6625-4e97-9b82-09cbc7b7aa10", 01:25:59.142 "is_configured": true, 01:25:59.142 "data_offset": 0, 01:25:59.142 "data_size": 65536 01:25:59.142 }, 01:25:59.142 { 01:25:59.142 "name": "BaseBdev2", 01:25:59.142 "uuid": "f39dec9f-6797-4eb6-b117-46e3e125eade", 01:25:59.142 "is_configured": true, 01:25:59.142 
"data_offset": 0, 01:25:59.142 "data_size": 65536 01:25:59.142 }, 01:25:59.142 { 01:25:59.142 "name": "BaseBdev3", 01:25:59.142 "uuid": "00000000-0000-0000-0000-000000000000", 01:25:59.142 "is_configured": false, 01:25:59.142 "data_offset": 0, 01:25:59.142 "data_size": 0 01:25:59.142 }, 01:25:59.142 { 01:25:59.142 "name": "BaseBdev4", 01:25:59.142 "uuid": "00000000-0000-0000-0000-000000000000", 01:25:59.142 "is_configured": false, 01:25:59.142 "data_offset": 0, 01:25:59.142 "data_size": 0 01:25:59.142 } 01:25:59.142 ] 01:25:59.142 }' 01:25:59.142 05:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:25:59.142 05:20:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:25:59.710 05:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 01:25:59.710 05:20:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:59.710 05:20:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:25:59.710 [2024-12-09 05:20:51.140833] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 01:25:59.710 BaseBdev3 01:25:59.710 05:20:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:59.710 05:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 01:25:59.710 05:20:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 01:25:59.710 05:20:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 01:25:59.710 05:20:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 01:25:59.710 05:20:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 01:25:59.710 05:20:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 01:25:59.710 05:20:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 01:25:59.710 05:20:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:59.710 05:20:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:25:59.710 05:20:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:59.710 05:20:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 01:25:59.710 05:20:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:59.710 05:20:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:25:59.710 [ 01:25:59.710 { 01:25:59.710 "name": "BaseBdev3", 01:25:59.710 "aliases": [ 01:25:59.710 "465f983d-6a63-4457-af4c-37ef3e1afdd0" 01:25:59.710 ], 01:25:59.710 "product_name": "Malloc disk", 01:25:59.710 "block_size": 512, 01:25:59.710 "num_blocks": 65536, 01:25:59.710 "uuid": "465f983d-6a63-4457-af4c-37ef3e1afdd0", 01:25:59.710 "assigned_rate_limits": { 01:25:59.710 "rw_ios_per_sec": 0, 01:25:59.710 "rw_mbytes_per_sec": 0, 01:25:59.710 "r_mbytes_per_sec": 0, 01:25:59.710 "w_mbytes_per_sec": 0 01:25:59.710 }, 01:25:59.710 "claimed": true, 01:25:59.710 "claim_type": "exclusive_write", 01:25:59.710 "zoned": false, 01:25:59.710 "supported_io_types": { 01:25:59.710 "read": true, 01:25:59.710 "write": true, 01:25:59.710 "unmap": true, 01:25:59.710 "flush": true, 01:25:59.710 "reset": true, 01:25:59.710 "nvme_admin": false, 01:25:59.710 "nvme_io": false, 01:25:59.710 "nvme_io_md": false, 01:25:59.710 "write_zeroes": true, 01:25:59.710 "zcopy": true, 01:25:59.710 "get_zone_info": false, 01:25:59.710 "zone_management": false, 01:25:59.710 "zone_append": false, 01:25:59.710 "compare": false, 01:25:59.710 "compare_and_write": false, 01:25:59.710 "abort": true, 
01:25:59.710 "seek_hole": false, 01:25:59.710 "seek_data": false, 01:25:59.710 "copy": true, 01:25:59.710 "nvme_iov_md": false 01:25:59.710 }, 01:25:59.710 "memory_domains": [ 01:25:59.710 { 01:25:59.710 "dma_device_id": "system", 01:25:59.710 "dma_device_type": 1 01:25:59.710 }, 01:25:59.710 { 01:25:59.710 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:25:59.710 "dma_device_type": 2 01:25:59.710 } 01:25:59.710 ], 01:25:59.710 "driver_specific": {} 01:25:59.710 } 01:25:59.710 ] 01:25:59.710 05:20:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:59.710 05:20:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 01:25:59.710 05:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 01:25:59.710 05:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 01:25:59.710 05:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 01:25:59.710 05:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:25:59.710 05:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:25:59.710 05:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:25:59.710 05:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:25:59.710 05:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 01:25:59.710 05:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:25:59.710 05:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:25:59.710 05:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:25:59.710 05:20:51 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:25:59.710 05:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:25:59.710 05:20:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:59.710 05:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:25:59.710 05:20:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:25:59.710 05:20:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:59.710 05:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:25:59.710 "name": "Existed_Raid", 01:25:59.710 "uuid": "00000000-0000-0000-0000-000000000000", 01:25:59.710 "strip_size_kb": 0, 01:25:59.710 "state": "configuring", 01:25:59.710 "raid_level": "raid1", 01:25:59.710 "superblock": false, 01:25:59.710 "num_base_bdevs": 4, 01:25:59.710 "num_base_bdevs_discovered": 3, 01:25:59.710 "num_base_bdevs_operational": 4, 01:25:59.710 "base_bdevs_list": [ 01:25:59.710 { 01:25:59.710 "name": "BaseBdev1", 01:25:59.710 "uuid": "e4e11f41-6625-4e97-9b82-09cbc7b7aa10", 01:25:59.710 "is_configured": true, 01:25:59.710 "data_offset": 0, 01:25:59.710 "data_size": 65536 01:25:59.710 }, 01:25:59.710 { 01:25:59.710 "name": "BaseBdev2", 01:25:59.710 "uuid": "f39dec9f-6797-4eb6-b117-46e3e125eade", 01:25:59.710 "is_configured": true, 01:25:59.710 "data_offset": 0, 01:25:59.710 "data_size": 65536 01:25:59.710 }, 01:25:59.710 { 01:25:59.710 "name": "BaseBdev3", 01:25:59.710 "uuid": "465f983d-6a63-4457-af4c-37ef3e1afdd0", 01:25:59.710 "is_configured": true, 01:25:59.710 "data_offset": 0, 01:25:59.710 "data_size": 65536 01:25:59.710 }, 01:25:59.710 { 01:25:59.710 "name": "BaseBdev4", 01:25:59.710 "uuid": "00000000-0000-0000-0000-000000000000", 01:25:59.710 "is_configured": false, 01:25:59.710 "data_offset": 
0, 01:25:59.710 "data_size": 0 01:25:59.710 } 01:25:59.710 ] 01:25:59.710 }' 01:25:59.710 05:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:25:59.710 05:20:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:26:00.279 05:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 01:26:00.279 05:20:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:00.279 05:20:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:26:00.279 [2024-12-09 05:20:51.759877] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 01:26:00.279 [2024-12-09 05:20:51.760251] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 01:26:00.279 [2024-12-09 05:20:51.760275] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 01:26:00.279 [2024-12-09 05:20:51.760689] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 01:26:00.279 [2024-12-09 05:20:51.760950] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 01:26:00.279 [2024-12-09 05:20:51.760973] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 01:26:00.279 [2024-12-09 05:20:51.761324] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:26:00.279 BaseBdev4 01:26:00.279 05:20:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:00.279 05:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 01:26:00.279 05:20:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 01:26:00.279 05:20:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local 
bdev_timeout= 01:26:00.279 05:20:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 01:26:00.279 05:20:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 01:26:00.279 05:20:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 01:26:00.279 05:20:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 01:26:00.279 05:20:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:00.279 05:20:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:26:00.279 05:20:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:00.279 05:20:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 01:26:00.279 05:20:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:00.279 05:20:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:26:00.279 [ 01:26:00.279 { 01:26:00.279 "name": "BaseBdev4", 01:26:00.279 "aliases": [ 01:26:00.279 "8d256c60-f50f-43c5-b12e-a41da6ac1b9b" 01:26:00.279 ], 01:26:00.279 "product_name": "Malloc disk", 01:26:00.279 "block_size": 512, 01:26:00.279 "num_blocks": 65536, 01:26:00.279 "uuid": "8d256c60-f50f-43c5-b12e-a41da6ac1b9b", 01:26:00.279 "assigned_rate_limits": { 01:26:00.279 "rw_ios_per_sec": 0, 01:26:00.279 "rw_mbytes_per_sec": 0, 01:26:00.279 "r_mbytes_per_sec": 0, 01:26:00.279 "w_mbytes_per_sec": 0 01:26:00.279 }, 01:26:00.279 "claimed": true, 01:26:00.279 "claim_type": "exclusive_write", 01:26:00.279 "zoned": false, 01:26:00.279 "supported_io_types": { 01:26:00.279 "read": true, 01:26:00.279 "write": true, 01:26:00.279 "unmap": true, 01:26:00.279 "flush": true, 01:26:00.279 "reset": true, 01:26:00.279 "nvme_admin": false, 01:26:00.279 "nvme_io": 
false, 01:26:00.279 "nvme_io_md": false, 01:26:00.279 "write_zeroes": true, 01:26:00.279 "zcopy": true, 01:26:00.279 "get_zone_info": false, 01:26:00.279 "zone_management": false, 01:26:00.279 "zone_append": false, 01:26:00.279 "compare": false, 01:26:00.279 "compare_and_write": false, 01:26:00.279 "abort": true, 01:26:00.279 "seek_hole": false, 01:26:00.279 "seek_data": false, 01:26:00.279 "copy": true, 01:26:00.279 "nvme_iov_md": false 01:26:00.279 }, 01:26:00.279 "memory_domains": [ 01:26:00.279 { 01:26:00.279 "dma_device_id": "system", 01:26:00.279 "dma_device_type": 1 01:26:00.279 }, 01:26:00.279 { 01:26:00.279 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:26:00.279 "dma_device_type": 2 01:26:00.279 } 01:26:00.279 ], 01:26:00.279 "driver_specific": {} 01:26:00.279 } 01:26:00.279 ] 01:26:00.279 05:20:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:00.279 05:20:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 01:26:00.279 05:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 01:26:00.279 05:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 01:26:00.279 05:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 01:26:00.279 05:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:26:00.279 05:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:26:00.279 05:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:26:00.279 05:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:26:00.279 05:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 01:26:00.279 05:20:51 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:26:00.279 05:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:26:00.279 05:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:26:00.279 05:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:26:00.279 05:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:26:00.279 05:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:26:00.279 05:20:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:00.279 05:20:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:26:00.279 05:20:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:00.279 05:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:26:00.279 "name": "Existed_Raid", 01:26:00.279 "uuid": "d273cb41-5dec-4285-a08f-57392bd3ac91", 01:26:00.279 "strip_size_kb": 0, 01:26:00.279 "state": "online", 01:26:00.279 "raid_level": "raid1", 01:26:00.279 "superblock": false, 01:26:00.280 "num_base_bdevs": 4, 01:26:00.280 "num_base_bdevs_discovered": 4, 01:26:00.280 "num_base_bdevs_operational": 4, 01:26:00.280 "base_bdevs_list": [ 01:26:00.280 { 01:26:00.280 "name": "BaseBdev1", 01:26:00.280 "uuid": "e4e11f41-6625-4e97-9b82-09cbc7b7aa10", 01:26:00.280 "is_configured": true, 01:26:00.280 "data_offset": 0, 01:26:00.280 "data_size": 65536 01:26:00.280 }, 01:26:00.280 { 01:26:00.280 "name": "BaseBdev2", 01:26:00.280 "uuid": "f39dec9f-6797-4eb6-b117-46e3e125eade", 01:26:00.280 "is_configured": true, 01:26:00.280 "data_offset": 0, 01:26:00.280 "data_size": 65536 01:26:00.280 }, 01:26:00.280 { 01:26:00.280 "name": "BaseBdev3", 01:26:00.280 "uuid": "465f983d-6a63-4457-af4c-37ef3e1afdd0", 
01:26:00.280 "is_configured": true, 01:26:00.280 "data_offset": 0, 01:26:00.280 "data_size": 65536 01:26:00.280 }, 01:26:00.280 { 01:26:00.280 "name": "BaseBdev4", 01:26:00.280 "uuid": "8d256c60-f50f-43c5-b12e-a41da6ac1b9b", 01:26:00.280 "is_configured": true, 01:26:00.280 "data_offset": 0, 01:26:00.280 "data_size": 65536 01:26:00.280 } 01:26:00.280 ] 01:26:00.280 }' 01:26:00.280 05:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:26:00.280 05:20:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:26:00.847 05:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 01:26:00.847 05:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 01:26:00.847 05:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 01:26:00.847 05:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 01:26:00.847 05:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 01:26:00.847 05:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 01:26:00.847 05:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 01:26:00.847 05:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 01:26:00.847 05:20:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:00.847 05:20:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:26:00.847 [2024-12-09 05:20:52.324550] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 01:26:00.847 05:20:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:00.847 05:20:52 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 01:26:00.847 "name": "Existed_Raid", 01:26:00.847 "aliases": [ 01:26:00.847 "d273cb41-5dec-4285-a08f-57392bd3ac91" 01:26:00.847 ], 01:26:00.847 "product_name": "Raid Volume", 01:26:00.847 "block_size": 512, 01:26:00.847 "num_blocks": 65536, 01:26:00.847 "uuid": "d273cb41-5dec-4285-a08f-57392bd3ac91", 01:26:00.847 "assigned_rate_limits": { 01:26:00.847 "rw_ios_per_sec": 0, 01:26:00.847 "rw_mbytes_per_sec": 0, 01:26:00.847 "r_mbytes_per_sec": 0, 01:26:00.847 "w_mbytes_per_sec": 0 01:26:00.847 }, 01:26:00.847 "claimed": false, 01:26:00.847 "zoned": false, 01:26:00.847 "supported_io_types": { 01:26:00.847 "read": true, 01:26:00.847 "write": true, 01:26:00.847 "unmap": false, 01:26:00.847 "flush": false, 01:26:00.847 "reset": true, 01:26:00.847 "nvme_admin": false, 01:26:00.847 "nvme_io": false, 01:26:00.847 "nvme_io_md": false, 01:26:00.847 "write_zeroes": true, 01:26:00.847 "zcopy": false, 01:26:00.847 "get_zone_info": false, 01:26:00.847 "zone_management": false, 01:26:00.847 "zone_append": false, 01:26:00.847 "compare": false, 01:26:00.847 "compare_and_write": false, 01:26:00.847 "abort": false, 01:26:00.847 "seek_hole": false, 01:26:00.847 "seek_data": false, 01:26:00.847 "copy": false, 01:26:00.847 "nvme_iov_md": false 01:26:00.847 }, 01:26:00.847 "memory_domains": [ 01:26:00.847 { 01:26:00.847 "dma_device_id": "system", 01:26:00.847 "dma_device_type": 1 01:26:00.847 }, 01:26:00.847 { 01:26:00.847 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:26:00.847 "dma_device_type": 2 01:26:00.847 }, 01:26:00.847 { 01:26:00.847 "dma_device_id": "system", 01:26:00.847 "dma_device_type": 1 01:26:00.847 }, 01:26:00.847 { 01:26:00.847 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:26:00.847 "dma_device_type": 2 01:26:00.847 }, 01:26:00.847 { 01:26:00.847 "dma_device_id": "system", 01:26:00.847 "dma_device_type": 1 01:26:00.847 }, 01:26:00.847 { 01:26:00.847 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:26:00.847 "dma_device_type": 2 
01:26:00.847 }, 01:26:00.847 { 01:26:00.847 "dma_device_id": "system", 01:26:00.847 "dma_device_type": 1 01:26:00.847 }, 01:26:00.847 { 01:26:00.847 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:26:00.847 "dma_device_type": 2 01:26:00.847 } 01:26:00.847 ], 01:26:00.847 "driver_specific": { 01:26:00.847 "raid": { 01:26:00.847 "uuid": "d273cb41-5dec-4285-a08f-57392bd3ac91", 01:26:00.847 "strip_size_kb": 0, 01:26:00.847 "state": "online", 01:26:00.847 "raid_level": "raid1", 01:26:00.847 "superblock": false, 01:26:00.847 "num_base_bdevs": 4, 01:26:00.847 "num_base_bdevs_discovered": 4, 01:26:00.847 "num_base_bdevs_operational": 4, 01:26:00.847 "base_bdevs_list": [ 01:26:00.847 { 01:26:00.847 "name": "BaseBdev1", 01:26:00.847 "uuid": "e4e11f41-6625-4e97-9b82-09cbc7b7aa10", 01:26:00.847 "is_configured": true, 01:26:00.847 "data_offset": 0, 01:26:00.847 "data_size": 65536 01:26:00.847 }, 01:26:00.847 { 01:26:00.847 "name": "BaseBdev2", 01:26:00.847 "uuid": "f39dec9f-6797-4eb6-b117-46e3e125eade", 01:26:00.847 "is_configured": true, 01:26:00.847 "data_offset": 0, 01:26:00.847 "data_size": 65536 01:26:00.847 }, 01:26:00.847 { 01:26:00.847 "name": "BaseBdev3", 01:26:00.847 "uuid": "465f983d-6a63-4457-af4c-37ef3e1afdd0", 01:26:00.847 "is_configured": true, 01:26:00.847 "data_offset": 0, 01:26:00.847 "data_size": 65536 01:26:00.847 }, 01:26:00.847 { 01:26:00.847 "name": "BaseBdev4", 01:26:00.847 "uuid": "8d256c60-f50f-43c5-b12e-a41da6ac1b9b", 01:26:00.847 "is_configured": true, 01:26:00.847 "data_offset": 0, 01:26:00.847 "data_size": 65536 01:26:00.847 } 01:26:00.847 ] 01:26:00.847 } 01:26:00.847 } 01:26:00.847 }' 01:26:00.847 05:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 01:26:00.847 05:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 01:26:00.847 BaseBdev2 01:26:00.847 BaseBdev3 01:26:00.847 BaseBdev4' 01:26:00.847 
05:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:26:01.106 05:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 01:26:01.106 05:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:26:01.106 05:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 01:26:01.106 05:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:26:01.106 05:20:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:01.106 05:20:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:26:01.106 05:20:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:01.106 05:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:26:01.106 05:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:26:01.106 05:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:26:01.106 05:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 01:26:01.106 05:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:26:01.106 05:20:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:01.106 05:20:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:26:01.106 05:20:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:01.106 05:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 
-- # cmp_base_bdev='512 ' 01:26:01.106 05:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:26:01.106 05:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:26:01.106 05:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 01:26:01.106 05:20:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:01.106 05:20:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:26:01.106 05:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:26:01.106 05:20:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:01.106 05:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:26:01.106 05:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:26:01.106 05:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:26:01.106 05:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 01:26:01.106 05:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:26:01.106 05:20:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:01.106 05:20:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:26:01.106 05:20:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:01.106 05:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:26:01.106 05:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 
512 == \5\1\2\ \ \ ]] 01:26:01.106 05:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 01:26:01.106 05:20:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:01.106 05:20:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:26:01.106 [2024-12-09 05:20:52.696332] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 01:26:01.366 05:20:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:01.366 05:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 01:26:01.366 05:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 01:26:01.366 05:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 01:26:01.366 05:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 01:26:01.366 05:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 01:26:01.366 05:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 01:26:01.366 05:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:26:01.366 05:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:26:01.366 05:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:26:01.366 05:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:26:01.366 05:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 01:26:01.366 05:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:26:01.366 05:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 01:26:01.366 05:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:26:01.366 05:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:26:01.366 05:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:26:01.366 05:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:26:01.366 05:20:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:01.366 05:20:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:26:01.366 05:20:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:01.366 05:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:26:01.366 "name": "Existed_Raid", 01:26:01.366 "uuid": "d273cb41-5dec-4285-a08f-57392bd3ac91", 01:26:01.366 "strip_size_kb": 0, 01:26:01.366 "state": "online", 01:26:01.366 "raid_level": "raid1", 01:26:01.366 "superblock": false, 01:26:01.366 "num_base_bdevs": 4, 01:26:01.366 "num_base_bdevs_discovered": 3, 01:26:01.366 "num_base_bdevs_operational": 3, 01:26:01.366 "base_bdevs_list": [ 01:26:01.366 { 01:26:01.366 "name": null, 01:26:01.366 "uuid": "00000000-0000-0000-0000-000000000000", 01:26:01.366 "is_configured": false, 01:26:01.366 "data_offset": 0, 01:26:01.366 "data_size": 65536 01:26:01.366 }, 01:26:01.366 { 01:26:01.366 "name": "BaseBdev2", 01:26:01.366 "uuid": "f39dec9f-6797-4eb6-b117-46e3e125eade", 01:26:01.366 "is_configured": true, 01:26:01.366 "data_offset": 0, 01:26:01.366 "data_size": 65536 01:26:01.366 }, 01:26:01.366 { 01:26:01.366 "name": "BaseBdev3", 01:26:01.366 "uuid": "465f983d-6a63-4457-af4c-37ef3e1afdd0", 01:26:01.366 "is_configured": true, 01:26:01.366 "data_offset": 0, 01:26:01.366 "data_size": 65536 01:26:01.366 }, 01:26:01.366 { 
01:26:01.366 "name": "BaseBdev4", 01:26:01.366 "uuid": "8d256c60-f50f-43c5-b12e-a41da6ac1b9b", 01:26:01.366 "is_configured": true, 01:26:01.366 "data_offset": 0, 01:26:01.366 "data_size": 65536 01:26:01.366 } 01:26:01.366 ] 01:26:01.366 }' 01:26:01.366 05:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:26:01.366 05:20:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:26:01.934 05:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 01:26:01.934 05:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 01:26:01.934 05:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 01:26:01.934 05:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 01:26:01.934 05:20:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:01.934 05:20:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:26:01.934 05:20:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:01.934 05:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 01:26:01.934 05:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 01:26:01.934 05:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 01:26:01.934 05:20:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:01.934 05:20:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:26:01.934 [2024-12-09 05:20:53.374146] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 01:26:01.934 05:20:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:01.934 
05:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 01:26:01.935 05:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 01:26:01.935 05:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 01:26:01.935 05:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 01:26:01.935 05:20:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:01.935 05:20:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:26:01.935 05:20:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:01.935 05:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 01:26:01.935 05:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 01:26:01.935 05:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 01:26:01.935 05:20:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:01.935 05:20:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:26:01.935 [2024-12-09 05:20:53.528184] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 01:26:02.193 05:20:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:02.193 05:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 01:26:02.193 05:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 01:26:02.193 05:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 01:26:02.193 05:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 01:26:02.193 05:20:53 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:02.193 05:20:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:26:02.193 05:20:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:02.193 05:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 01:26:02.193 05:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 01:26:02.193 05:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 01:26:02.193 05:20:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:02.193 05:20:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:26:02.193 [2024-12-09 05:20:53.663759] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 01:26:02.193 [2024-12-09 05:20:53.663890] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 01:26:02.193 [2024-12-09 05:20:53.741000] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 01:26:02.193 [2024-12-09 05:20:53.741407] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 01:26:02.193 [2024-12-09 05:20:53.741601] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 01:26:02.193 05:20:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:02.193 05:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 01:26:02.193 05:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 01:26:02.193 05:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 01:26:02.193 05:20:53 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 01:26:02.193 05:20:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:02.194 05:20:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:26:02.194 05:20:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:02.194 05:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 01:26:02.194 05:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 01:26:02.194 05:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 01:26:02.194 05:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 01:26:02.194 05:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 01:26:02.194 05:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 01:26:02.194 05:20:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:02.194 05:20:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:26:02.453 BaseBdev2 01:26:02.453 05:20:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:02.453 05:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 01:26:02.454 05:20:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 01:26:02.454 05:20:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 01:26:02.454 05:20:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 01:26:02.454 05:20:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 01:26:02.454 05:20:53 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 01:26:02.454 05:20:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 01:26:02.454 05:20:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:02.454 05:20:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:26:02.454 05:20:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:02.454 05:20:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 01:26:02.454 05:20:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:02.454 05:20:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:26:02.454 [ 01:26:02.454 { 01:26:02.454 "name": "BaseBdev2", 01:26:02.454 "aliases": [ 01:26:02.454 "e4204166-5c3c-4bc7-b4c2-98c482c475c6" 01:26:02.454 ], 01:26:02.454 "product_name": "Malloc disk", 01:26:02.454 "block_size": 512, 01:26:02.454 "num_blocks": 65536, 01:26:02.454 "uuid": "e4204166-5c3c-4bc7-b4c2-98c482c475c6", 01:26:02.454 "assigned_rate_limits": { 01:26:02.454 "rw_ios_per_sec": 0, 01:26:02.454 "rw_mbytes_per_sec": 0, 01:26:02.454 "r_mbytes_per_sec": 0, 01:26:02.454 "w_mbytes_per_sec": 0 01:26:02.454 }, 01:26:02.454 "claimed": false, 01:26:02.454 "zoned": false, 01:26:02.454 "supported_io_types": { 01:26:02.454 "read": true, 01:26:02.454 "write": true, 01:26:02.454 "unmap": true, 01:26:02.454 "flush": true, 01:26:02.454 "reset": true, 01:26:02.454 "nvme_admin": false, 01:26:02.454 "nvme_io": false, 01:26:02.454 "nvme_io_md": false, 01:26:02.454 "write_zeroes": true, 01:26:02.454 "zcopy": true, 01:26:02.454 "get_zone_info": false, 01:26:02.454 "zone_management": false, 01:26:02.454 "zone_append": false, 01:26:02.454 "compare": false, 01:26:02.454 "compare_and_write": false, 
01:26:02.454 "abort": true, 01:26:02.454 "seek_hole": false, 01:26:02.454 "seek_data": false, 01:26:02.454 "copy": true, 01:26:02.454 "nvme_iov_md": false 01:26:02.454 }, 01:26:02.454 "memory_domains": [ 01:26:02.454 { 01:26:02.454 "dma_device_id": "system", 01:26:02.454 "dma_device_type": 1 01:26:02.454 }, 01:26:02.454 { 01:26:02.454 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:26:02.454 "dma_device_type": 2 01:26:02.454 } 01:26:02.454 ], 01:26:02.454 "driver_specific": {} 01:26:02.454 } 01:26:02.454 ] 01:26:02.454 05:20:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:02.454 05:20:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 01:26:02.454 05:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 01:26:02.454 05:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 01:26:02.454 05:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 01:26:02.454 05:20:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:02.454 05:20:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:26:02.454 BaseBdev3 01:26:02.454 05:20:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:02.454 05:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 01:26:02.454 05:20:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 01:26:02.454 05:20:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 01:26:02.454 05:20:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 01:26:02.454 05:20:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 01:26:02.454 05:20:53 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 01:26:02.454 05:20:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 01:26:02.454 05:20:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:02.454 05:20:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:26:02.454 05:20:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:02.454 05:20:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 01:26:02.454 05:20:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:02.454 05:20:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:26:02.454 [ 01:26:02.454 { 01:26:02.454 "name": "BaseBdev3", 01:26:02.454 "aliases": [ 01:26:02.454 "5c5bf540-fe37-41f6-b6b5-0ddd07db57c6" 01:26:02.454 ], 01:26:02.454 "product_name": "Malloc disk", 01:26:02.454 "block_size": 512, 01:26:02.454 "num_blocks": 65536, 01:26:02.454 "uuid": "5c5bf540-fe37-41f6-b6b5-0ddd07db57c6", 01:26:02.454 "assigned_rate_limits": { 01:26:02.454 "rw_ios_per_sec": 0, 01:26:02.454 "rw_mbytes_per_sec": 0, 01:26:02.454 "r_mbytes_per_sec": 0, 01:26:02.454 "w_mbytes_per_sec": 0 01:26:02.454 }, 01:26:02.454 "claimed": false, 01:26:02.454 "zoned": false, 01:26:02.454 "supported_io_types": { 01:26:02.454 "read": true, 01:26:02.454 "write": true, 01:26:02.454 "unmap": true, 01:26:02.454 "flush": true, 01:26:02.454 "reset": true, 01:26:02.454 "nvme_admin": false, 01:26:02.454 "nvme_io": false, 01:26:02.454 "nvme_io_md": false, 01:26:02.454 "write_zeroes": true, 01:26:02.454 "zcopy": true, 01:26:02.454 "get_zone_info": false, 01:26:02.454 "zone_management": false, 01:26:02.454 "zone_append": false, 01:26:02.454 "compare": false, 01:26:02.454 "compare_and_write": false, 
01:26:02.454 "abort": true, 01:26:02.454 "seek_hole": false, 01:26:02.454 "seek_data": false, 01:26:02.454 "copy": true, 01:26:02.454 "nvme_iov_md": false 01:26:02.454 }, 01:26:02.454 "memory_domains": [ 01:26:02.454 { 01:26:02.454 "dma_device_id": "system", 01:26:02.454 "dma_device_type": 1 01:26:02.454 }, 01:26:02.454 { 01:26:02.454 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:26:02.454 "dma_device_type": 2 01:26:02.454 } 01:26:02.454 ], 01:26:02.454 "driver_specific": {} 01:26:02.454 } 01:26:02.454 ] 01:26:02.454 05:20:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:02.454 05:20:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 01:26:02.454 05:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 01:26:02.454 05:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 01:26:02.454 05:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 01:26:02.454 05:20:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:02.454 05:20:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:26:02.454 BaseBdev4 01:26:02.454 05:20:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:02.454 05:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 01:26:02.454 05:20:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 01:26:02.454 05:20:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 01:26:02.454 05:20:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 01:26:02.454 05:20:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 01:26:02.454 05:20:53 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 01:26:02.454 05:20:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 01:26:02.454 05:20:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:02.454 05:20:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:26:02.454 05:20:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:02.454 05:20:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 01:26:02.454 05:20:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:02.454 05:20:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:26:02.454 [ 01:26:02.454 { 01:26:02.454 "name": "BaseBdev4", 01:26:02.454 "aliases": [ 01:26:02.454 "f439a8cb-0495-43f7-833f-42baddb1e11b" 01:26:02.454 ], 01:26:02.454 "product_name": "Malloc disk", 01:26:02.454 "block_size": 512, 01:26:02.454 "num_blocks": 65536, 01:26:02.454 "uuid": "f439a8cb-0495-43f7-833f-42baddb1e11b", 01:26:02.454 "assigned_rate_limits": { 01:26:02.454 "rw_ios_per_sec": 0, 01:26:02.454 "rw_mbytes_per_sec": 0, 01:26:02.454 "r_mbytes_per_sec": 0, 01:26:02.454 "w_mbytes_per_sec": 0 01:26:02.454 }, 01:26:02.454 "claimed": false, 01:26:02.454 "zoned": false, 01:26:02.454 "supported_io_types": { 01:26:02.454 "read": true, 01:26:02.454 "write": true, 01:26:02.454 "unmap": true, 01:26:02.454 "flush": true, 01:26:02.454 "reset": true, 01:26:02.454 "nvme_admin": false, 01:26:02.454 "nvme_io": false, 01:26:02.455 "nvme_io_md": false, 01:26:02.455 "write_zeroes": true, 01:26:02.455 "zcopy": true, 01:26:02.455 "get_zone_info": false, 01:26:02.455 "zone_management": false, 01:26:02.455 "zone_append": false, 01:26:02.455 "compare": false, 01:26:02.455 "compare_and_write": false, 
01:26:02.455 "abort": true, 01:26:02.455 "seek_hole": false, 01:26:02.455 "seek_data": false, 01:26:02.455 "copy": true, 01:26:02.455 "nvme_iov_md": false 01:26:02.455 }, 01:26:02.455 "memory_domains": [ 01:26:02.455 { 01:26:02.455 "dma_device_id": "system", 01:26:02.455 "dma_device_type": 1 01:26:02.455 }, 01:26:02.455 { 01:26:02.455 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:26:02.455 "dma_device_type": 2 01:26:02.455 } 01:26:02.455 ], 01:26:02.455 "driver_specific": {} 01:26:02.455 } 01:26:02.455 ] 01:26:02.455 05:20:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:02.455 05:20:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 01:26:02.455 05:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 01:26:02.455 05:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 01:26:02.455 05:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 01:26:02.455 05:20:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:02.455 05:20:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:26:02.455 [2024-12-09 05:20:54.027188] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 01:26:02.455 [2024-12-09 05:20:54.027256] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 01:26:02.455 [2024-12-09 05:20:54.027286] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 01:26:02.455 [2024-12-09 05:20:54.030059] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 01:26:02.455 [2024-12-09 05:20:54.030124] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 01:26:02.455 05:20:54 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:02.455 05:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 01:26:02.455 05:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:26:02.455 05:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:26:02.455 05:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:26:02.455 05:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:26:02.455 05:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 01:26:02.455 05:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:26:02.455 05:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:26:02.455 05:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:26:02.455 05:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:26:02.455 05:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:26:02.455 05:20:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:02.455 05:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:26:02.455 05:20:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:26:02.455 05:20:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:02.713 05:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:26:02.713 "name": "Existed_Raid", 01:26:02.713 "uuid": 
"00000000-0000-0000-0000-000000000000", 01:26:02.713 "strip_size_kb": 0, 01:26:02.713 "state": "configuring", 01:26:02.713 "raid_level": "raid1", 01:26:02.713 "superblock": false, 01:26:02.713 "num_base_bdevs": 4, 01:26:02.713 "num_base_bdevs_discovered": 3, 01:26:02.713 "num_base_bdevs_operational": 4, 01:26:02.713 "base_bdevs_list": [ 01:26:02.713 { 01:26:02.713 "name": "BaseBdev1", 01:26:02.713 "uuid": "00000000-0000-0000-0000-000000000000", 01:26:02.714 "is_configured": false, 01:26:02.714 "data_offset": 0, 01:26:02.714 "data_size": 0 01:26:02.714 }, 01:26:02.714 { 01:26:02.714 "name": "BaseBdev2", 01:26:02.714 "uuid": "e4204166-5c3c-4bc7-b4c2-98c482c475c6", 01:26:02.714 "is_configured": true, 01:26:02.714 "data_offset": 0, 01:26:02.714 "data_size": 65536 01:26:02.714 }, 01:26:02.714 { 01:26:02.714 "name": "BaseBdev3", 01:26:02.714 "uuid": "5c5bf540-fe37-41f6-b6b5-0ddd07db57c6", 01:26:02.714 "is_configured": true, 01:26:02.714 "data_offset": 0, 01:26:02.714 "data_size": 65536 01:26:02.714 }, 01:26:02.714 { 01:26:02.714 "name": "BaseBdev4", 01:26:02.714 "uuid": "f439a8cb-0495-43f7-833f-42baddb1e11b", 01:26:02.714 "is_configured": true, 01:26:02.714 "data_offset": 0, 01:26:02.714 "data_size": 65536 01:26:02.714 } 01:26:02.714 ] 01:26:02.714 }' 01:26:02.714 05:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:26:02.714 05:20:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:26:02.972 05:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 01:26:02.972 05:20:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:02.972 05:20:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:26:02.972 [2024-12-09 05:20:54.559340] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 01:26:02.972 05:20:54 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:02.972 05:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 01:26:02.972 05:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:26:02.972 05:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:26:02.972 05:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:26:02.972 05:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:26:02.972 05:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 01:26:02.972 05:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:26:02.972 05:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:26:02.972 05:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:26:02.972 05:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:26:02.972 05:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:26:02.972 05:20:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:02.972 05:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:26:02.972 05:20:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:26:03.231 05:20:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:03.231 05:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:26:03.231 "name": "Existed_Raid", 01:26:03.231 "uuid": "00000000-0000-0000-0000-000000000000", 01:26:03.231 
"strip_size_kb": 0, 01:26:03.231 "state": "configuring", 01:26:03.231 "raid_level": "raid1", 01:26:03.231 "superblock": false, 01:26:03.231 "num_base_bdevs": 4, 01:26:03.231 "num_base_bdevs_discovered": 2, 01:26:03.231 "num_base_bdevs_operational": 4, 01:26:03.231 "base_bdevs_list": [ 01:26:03.231 { 01:26:03.231 "name": "BaseBdev1", 01:26:03.231 "uuid": "00000000-0000-0000-0000-000000000000", 01:26:03.231 "is_configured": false, 01:26:03.231 "data_offset": 0, 01:26:03.231 "data_size": 0 01:26:03.231 }, 01:26:03.231 { 01:26:03.231 "name": null, 01:26:03.231 "uuid": "e4204166-5c3c-4bc7-b4c2-98c482c475c6", 01:26:03.231 "is_configured": false, 01:26:03.231 "data_offset": 0, 01:26:03.231 "data_size": 65536 01:26:03.231 }, 01:26:03.231 { 01:26:03.231 "name": "BaseBdev3", 01:26:03.231 "uuid": "5c5bf540-fe37-41f6-b6b5-0ddd07db57c6", 01:26:03.231 "is_configured": true, 01:26:03.231 "data_offset": 0, 01:26:03.231 "data_size": 65536 01:26:03.231 }, 01:26:03.231 { 01:26:03.231 "name": "BaseBdev4", 01:26:03.231 "uuid": "f439a8cb-0495-43f7-833f-42baddb1e11b", 01:26:03.231 "is_configured": true, 01:26:03.231 "data_offset": 0, 01:26:03.231 "data_size": 65536 01:26:03.231 } 01:26:03.231 ] 01:26:03.231 }' 01:26:03.231 05:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:26:03.231 05:20:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:26:03.489 05:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 01:26:03.489 05:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 01:26:03.489 05:20:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:03.489 05:20:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:26:03.747 05:20:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:03.747 05:20:55 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 01:26:03.747 05:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 01:26:03.747 05:20:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:03.747 05:20:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:26:03.747 [2024-12-09 05:20:55.183895] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 01:26:03.747 BaseBdev1 01:26:03.748 05:20:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:03.748 05:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 01:26:03.748 05:20:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 01:26:03.748 05:20:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 01:26:03.748 05:20:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 01:26:03.748 05:20:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 01:26:03.748 05:20:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 01:26:03.748 05:20:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 01:26:03.748 05:20:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:03.748 05:20:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:26:03.748 05:20:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:03.748 05:20:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 01:26:03.748 05:20:55 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@563 -- # xtrace_disable 01:26:03.748 05:20:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:26:03.748 [ 01:26:03.748 { 01:26:03.748 "name": "BaseBdev1", 01:26:03.748 "aliases": [ 01:26:03.748 "552eb0b9-3071-4e1a-b180-8b4bfca389aa" 01:26:03.748 ], 01:26:03.748 "product_name": "Malloc disk", 01:26:03.748 "block_size": 512, 01:26:03.748 "num_blocks": 65536, 01:26:03.748 "uuid": "552eb0b9-3071-4e1a-b180-8b4bfca389aa", 01:26:03.748 "assigned_rate_limits": { 01:26:03.748 "rw_ios_per_sec": 0, 01:26:03.748 "rw_mbytes_per_sec": 0, 01:26:03.748 "r_mbytes_per_sec": 0, 01:26:03.748 "w_mbytes_per_sec": 0 01:26:03.748 }, 01:26:03.748 "claimed": true, 01:26:03.748 "claim_type": "exclusive_write", 01:26:03.748 "zoned": false, 01:26:03.748 "supported_io_types": { 01:26:03.748 "read": true, 01:26:03.748 "write": true, 01:26:03.748 "unmap": true, 01:26:03.748 "flush": true, 01:26:03.748 "reset": true, 01:26:03.748 "nvme_admin": false, 01:26:03.748 "nvme_io": false, 01:26:03.748 "nvme_io_md": false, 01:26:03.748 "write_zeroes": true, 01:26:03.748 "zcopy": true, 01:26:03.748 "get_zone_info": false, 01:26:03.748 "zone_management": false, 01:26:03.748 "zone_append": false, 01:26:03.748 "compare": false, 01:26:03.748 "compare_and_write": false, 01:26:03.748 "abort": true, 01:26:03.748 "seek_hole": false, 01:26:03.748 "seek_data": false, 01:26:03.748 "copy": true, 01:26:03.748 "nvme_iov_md": false 01:26:03.748 }, 01:26:03.748 "memory_domains": [ 01:26:03.748 { 01:26:03.748 "dma_device_id": "system", 01:26:03.748 "dma_device_type": 1 01:26:03.748 }, 01:26:03.748 { 01:26:03.748 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:26:03.748 "dma_device_type": 2 01:26:03.748 } 01:26:03.748 ], 01:26:03.748 "driver_specific": {} 01:26:03.748 } 01:26:03.748 ] 01:26:03.748 05:20:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:03.748 05:20:55 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@911 -- # return 0 01:26:03.748 05:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 01:26:03.748 05:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:26:03.748 05:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:26:03.748 05:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:26:03.748 05:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:26:03.748 05:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 01:26:03.748 05:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:26:03.748 05:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:26:03.748 05:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:26:03.748 05:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:26:03.748 05:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:26:03.748 05:20:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:03.748 05:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:26:03.748 05:20:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:26:03.748 05:20:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:03.748 05:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:26:03.748 "name": "Existed_Raid", 01:26:03.748 "uuid": "00000000-0000-0000-0000-000000000000", 01:26:03.748 
"strip_size_kb": 0, 01:26:03.748 "state": "configuring", 01:26:03.748 "raid_level": "raid1", 01:26:03.748 "superblock": false, 01:26:03.748 "num_base_bdevs": 4, 01:26:03.748 "num_base_bdevs_discovered": 3, 01:26:03.748 "num_base_bdevs_operational": 4, 01:26:03.748 "base_bdevs_list": [ 01:26:03.748 { 01:26:03.748 "name": "BaseBdev1", 01:26:03.748 "uuid": "552eb0b9-3071-4e1a-b180-8b4bfca389aa", 01:26:03.748 "is_configured": true, 01:26:03.748 "data_offset": 0, 01:26:03.748 "data_size": 65536 01:26:03.748 }, 01:26:03.748 { 01:26:03.748 "name": null, 01:26:03.748 "uuid": "e4204166-5c3c-4bc7-b4c2-98c482c475c6", 01:26:03.748 "is_configured": false, 01:26:03.748 "data_offset": 0, 01:26:03.748 "data_size": 65536 01:26:03.748 }, 01:26:03.748 { 01:26:03.748 "name": "BaseBdev3", 01:26:03.748 "uuid": "5c5bf540-fe37-41f6-b6b5-0ddd07db57c6", 01:26:03.748 "is_configured": true, 01:26:03.748 "data_offset": 0, 01:26:03.748 "data_size": 65536 01:26:03.748 }, 01:26:03.748 { 01:26:03.748 "name": "BaseBdev4", 01:26:03.748 "uuid": "f439a8cb-0495-43f7-833f-42baddb1e11b", 01:26:03.748 "is_configured": true, 01:26:03.748 "data_offset": 0, 01:26:03.748 "data_size": 65536 01:26:03.748 } 01:26:03.748 ] 01:26:03.748 }' 01:26:03.748 05:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:26:03.748 05:20:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:26:04.316 05:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 01:26:04.316 05:20:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:04.316 05:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 01:26:04.316 05:20:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:26:04.316 05:20:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:04.316 
05:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 01:26:04.316 05:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 01:26:04.316 05:20:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:04.316 05:20:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:26:04.316 [2024-12-09 05:20:55.788189] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 01:26:04.316 05:20:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:04.316 05:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 01:26:04.316 05:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:26:04.316 05:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:26:04.316 05:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:26:04.316 05:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:26:04.316 05:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 01:26:04.316 05:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:26:04.316 05:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:26:04.316 05:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:26:04.316 05:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:26:04.316 05:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:26:04.316 05:20:55 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 01:26:04.317 05:20:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:26:04.317 05:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:26:04.317 05:20:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:04.317 05:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:26:04.317 "name": "Existed_Raid", 01:26:04.317 "uuid": "00000000-0000-0000-0000-000000000000", 01:26:04.317 "strip_size_kb": 0, 01:26:04.317 "state": "configuring", 01:26:04.317 "raid_level": "raid1", 01:26:04.317 "superblock": false, 01:26:04.317 "num_base_bdevs": 4, 01:26:04.317 "num_base_bdevs_discovered": 2, 01:26:04.317 "num_base_bdevs_operational": 4, 01:26:04.317 "base_bdevs_list": [ 01:26:04.317 { 01:26:04.317 "name": "BaseBdev1", 01:26:04.317 "uuid": "552eb0b9-3071-4e1a-b180-8b4bfca389aa", 01:26:04.317 "is_configured": true, 01:26:04.317 "data_offset": 0, 01:26:04.317 "data_size": 65536 01:26:04.317 }, 01:26:04.317 { 01:26:04.317 "name": null, 01:26:04.317 "uuid": "e4204166-5c3c-4bc7-b4c2-98c482c475c6", 01:26:04.317 "is_configured": false, 01:26:04.317 "data_offset": 0, 01:26:04.317 "data_size": 65536 01:26:04.317 }, 01:26:04.317 { 01:26:04.317 "name": null, 01:26:04.317 "uuid": "5c5bf540-fe37-41f6-b6b5-0ddd07db57c6", 01:26:04.317 "is_configured": false, 01:26:04.317 "data_offset": 0, 01:26:04.317 "data_size": 65536 01:26:04.317 }, 01:26:04.317 { 01:26:04.317 "name": "BaseBdev4", 01:26:04.317 "uuid": "f439a8cb-0495-43f7-833f-42baddb1e11b", 01:26:04.317 "is_configured": true, 01:26:04.317 "data_offset": 0, 01:26:04.317 "data_size": 65536 01:26:04.317 } 01:26:04.317 ] 01:26:04.317 }' 01:26:04.317 05:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:26:04.317 05:20:55 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 01:26:04.884 05:20:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 01:26:04.884 05:20:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:04.884 05:20:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:26:04.884 05:20:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 01:26:04.884 05:20:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:04.884 05:20:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 01:26:04.884 05:20:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 01:26:04.884 05:20:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:04.884 05:20:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:26:04.884 [2024-12-09 05:20:56.384328] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 01:26:04.884 05:20:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:04.884 05:20:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 01:26:04.884 05:20:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:26:04.884 05:20:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:26:04.884 05:20:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:26:04.884 05:20:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:26:04.884 05:20:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 01:26:04.884 05:20:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:26:04.884 05:20:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:26:04.884 05:20:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:26:04.884 05:20:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:26:04.884 05:20:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:26:04.884 05:20:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:04.884 05:20:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:26:04.885 05:20:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:26:04.885 05:20:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:04.885 05:20:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:26:04.885 "name": "Existed_Raid", 01:26:04.885 "uuid": "00000000-0000-0000-0000-000000000000", 01:26:04.885 "strip_size_kb": 0, 01:26:04.885 "state": "configuring", 01:26:04.885 "raid_level": "raid1", 01:26:04.885 "superblock": false, 01:26:04.885 "num_base_bdevs": 4, 01:26:04.885 "num_base_bdevs_discovered": 3, 01:26:04.885 "num_base_bdevs_operational": 4, 01:26:04.885 "base_bdevs_list": [ 01:26:04.885 { 01:26:04.885 "name": "BaseBdev1", 01:26:04.885 "uuid": "552eb0b9-3071-4e1a-b180-8b4bfca389aa", 01:26:04.885 "is_configured": true, 01:26:04.885 "data_offset": 0, 01:26:04.885 "data_size": 65536 01:26:04.885 }, 01:26:04.885 { 01:26:04.885 "name": null, 01:26:04.885 "uuid": "e4204166-5c3c-4bc7-b4c2-98c482c475c6", 01:26:04.885 "is_configured": false, 01:26:04.885 "data_offset": 0, 01:26:04.885 "data_size": 65536 01:26:04.885 }, 01:26:04.885 { 
01:26:04.885 "name": "BaseBdev3", 01:26:04.885 "uuid": "5c5bf540-fe37-41f6-b6b5-0ddd07db57c6", 01:26:04.885 "is_configured": true, 01:26:04.885 "data_offset": 0, 01:26:04.885 "data_size": 65536 01:26:04.885 }, 01:26:04.885 { 01:26:04.885 "name": "BaseBdev4", 01:26:04.885 "uuid": "f439a8cb-0495-43f7-833f-42baddb1e11b", 01:26:04.885 "is_configured": true, 01:26:04.885 "data_offset": 0, 01:26:04.885 "data_size": 65536 01:26:04.885 } 01:26:04.885 ] 01:26:04.885 }' 01:26:04.885 05:20:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:26:04.885 05:20:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:26:05.452 05:20:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 01:26:05.452 05:20:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:05.452 05:20:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:26:05.452 05:20:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 01:26:05.452 05:20:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:05.452 05:20:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 01:26:05.452 05:20:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 01:26:05.452 05:20:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:05.452 05:20:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:26:05.452 [2024-12-09 05:20:56.972620] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 01:26:05.452 05:20:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:05.452 05:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # 
verify_raid_bdev_state Existed_Raid configuring raid1 0 4 01:26:05.452 05:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:26:05.452 05:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:26:05.452 05:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:26:05.452 05:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:26:05.452 05:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 01:26:05.452 05:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:26:05.452 05:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:26:05.452 05:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:26:05.452 05:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:26:05.452 05:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:26:05.452 05:20:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:05.452 05:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:26:05.452 05:20:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:26:05.725 05:20:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:05.725 05:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:26:05.725 "name": "Existed_Raid", 01:26:05.725 "uuid": "00000000-0000-0000-0000-000000000000", 01:26:05.725 "strip_size_kb": 0, 01:26:05.725 "state": "configuring", 01:26:05.725 "raid_level": "raid1", 01:26:05.725 "superblock": false, 01:26:05.725 
"num_base_bdevs": 4, 01:26:05.725 "num_base_bdevs_discovered": 2, 01:26:05.725 "num_base_bdevs_operational": 4, 01:26:05.725 "base_bdevs_list": [ 01:26:05.725 { 01:26:05.726 "name": null, 01:26:05.726 "uuid": "552eb0b9-3071-4e1a-b180-8b4bfca389aa", 01:26:05.726 "is_configured": false, 01:26:05.726 "data_offset": 0, 01:26:05.726 "data_size": 65536 01:26:05.726 }, 01:26:05.726 { 01:26:05.726 "name": null, 01:26:05.726 "uuid": "e4204166-5c3c-4bc7-b4c2-98c482c475c6", 01:26:05.726 "is_configured": false, 01:26:05.726 "data_offset": 0, 01:26:05.726 "data_size": 65536 01:26:05.726 }, 01:26:05.726 { 01:26:05.726 "name": "BaseBdev3", 01:26:05.726 "uuid": "5c5bf540-fe37-41f6-b6b5-0ddd07db57c6", 01:26:05.726 "is_configured": true, 01:26:05.726 "data_offset": 0, 01:26:05.726 "data_size": 65536 01:26:05.726 }, 01:26:05.726 { 01:26:05.726 "name": "BaseBdev4", 01:26:05.726 "uuid": "f439a8cb-0495-43f7-833f-42baddb1e11b", 01:26:05.726 "is_configured": true, 01:26:05.726 "data_offset": 0, 01:26:05.726 "data_size": 65536 01:26:05.726 } 01:26:05.726 ] 01:26:05.726 }' 01:26:05.726 05:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:26:05.726 05:20:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:26:06.018 05:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 01:26:06.018 05:20:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:06.018 05:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 01:26:06.018 05:20:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:26:06.018 05:20:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:06.018 05:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 01:26:06.018 05:20:57 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 01:26:06.018 05:20:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:06.018 05:20:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:26:06.018 [2024-12-09 05:20:57.630087] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 01:26:06.276 05:20:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:06.276 05:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 01:26:06.276 05:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:26:06.276 05:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:26:06.276 05:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:26:06.276 05:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:26:06.276 05:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 01:26:06.276 05:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:26:06.276 05:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:26:06.276 05:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:26:06.276 05:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:26:06.276 05:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:26:06.276 05:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:26:06.276 05:20:57 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:06.276 05:20:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:26:06.276 05:20:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:06.276 05:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:26:06.276 "name": "Existed_Raid", 01:26:06.276 "uuid": "00000000-0000-0000-0000-000000000000", 01:26:06.276 "strip_size_kb": 0, 01:26:06.276 "state": "configuring", 01:26:06.276 "raid_level": "raid1", 01:26:06.276 "superblock": false, 01:26:06.276 "num_base_bdevs": 4, 01:26:06.276 "num_base_bdevs_discovered": 3, 01:26:06.276 "num_base_bdevs_operational": 4, 01:26:06.276 "base_bdevs_list": [ 01:26:06.276 { 01:26:06.276 "name": null, 01:26:06.276 "uuid": "552eb0b9-3071-4e1a-b180-8b4bfca389aa", 01:26:06.276 "is_configured": false, 01:26:06.276 "data_offset": 0, 01:26:06.276 "data_size": 65536 01:26:06.276 }, 01:26:06.276 { 01:26:06.276 "name": "BaseBdev2", 01:26:06.276 "uuid": "e4204166-5c3c-4bc7-b4c2-98c482c475c6", 01:26:06.276 "is_configured": true, 01:26:06.276 "data_offset": 0, 01:26:06.276 "data_size": 65536 01:26:06.276 }, 01:26:06.276 { 01:26:06.276 "name": "BaseBdev3", 01:26:06.276 "uuid": "5c5bf540-fe37-41f6-b6b5-0ddd07db57c6", 01:26:06.276 "is_configured": true, 01:26:06.276 "data_offset": 0, 01:26:06.276 "data_size": 65536 01:26:06.276 }, 01:26:06.276 { 01:26:06.276 "name": "BaseBdev4", 01:26:06.276 "uuid": "f439a8cb-0495-43f7-833f-42baddb1e11b", 01:26:06.276 "is_configured": true, 01:26:06.276 "data_offset": 0, 01:26:06.276 "data_size": 65536 01:26:06.276 } 01:26:06.276 ] 01:26:06.276 }' 01:26:06.276 05:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:26:06.276 05:20:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:26:06.534 05:20:58 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 01:26:06.534 05:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:06.534 05:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:26:06.534 05:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 01:26:06.534 05:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:06.793 05:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 01:26:06.793 05:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 01:26:06.793 05:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:06.793 05:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:26:06.793 05:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 01:26:06.793 05:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:06.793 05:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 552eb0b9-3071-4e1a-b180-8b4bfca389aa 01:26:06.793 05:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:06.793 05:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:26:06.793 [2024-12-09 05:20:58.265137] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 01:26:06.793 [2024-12-09 05:20:58.265203] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 01:26:06.793 [2024-12-09 05:20:58.265218] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 01:26:06.793 [2024-12-09 05:20:58.265657] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 01:26:06.793 [2024-12-09 05:20:58.265926] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 01:26:06.793 [2024-12-09 05:20:58.265943] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 01:26:06.793 [2024-12-09 05:20:58.266298] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:26:06.793 NewBaseBdev 01:26:06.793 05:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:06.793 05:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 01:26:06.793 05:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 01:26:06.793 05:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 01:26:06.793 05:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 01:26:06.793 05:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 01:26:06.793 05:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 01:26:06.793 05:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 01:26:06.793 05:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:06.793 05:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:26:06.793 05:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:06.793 05:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 01:26:06.793 05:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:06.793 05:20:58 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:26:06.793 [ 01:26:06.793 { 01:26:06.793 "name": "NewBaseBdev", 01:26:06.793 "aliases": [ 01:26:06.793 "552eb0b9-3071-4e1a-b180-8b4bfca389aa" 01:26:06.793 ], 01:26:06.793 "product_name": "Malloc disk", 01:26:06.793 "block_size": 512, 01:26:06.793 "num_blocks": 65536, 01:26:06.793 "uuid": "552eb0b9-3071-4e1a-b180-8b4bfca389aa", 01:26:06.793 "assigned_rate_limits": { 01:26:06.793 "rw_ios_per_sec": 0, 01:26:06.793 "rw_mbytes_per_sec": 0, 01:26:06.793 "r_mbytes_per_sec": 0, 01:26:06.793 "w_mbytes_per_sec": 0 01:26:06.793 }, 01:26:06.793 "claimed": true, 01:26:06.793 "claim_type": "exclusive_write", 01:26:06.793 "zoned": false, 01:26:06.793 "supported_io_types": { 01:26:06.793 "read": true, 01:26:06.793 "write": true, 01:26:06.793 "unmap": true, 01:26:06.793 "flush": true, 01:26:06.793 "reset": true, 01:26:06.793 "nvme_admin": false, 01:26:06.793 "nvme_io": false, 01:26:06.793 "nvme_io_md": false, 01:26:06.793 "write_zeroes": true, 01:26:06.793 "zcopy": true, 01:26:06.793 "get_zone_info": false, 01:26:06.793 "zone_management": false, 01:26:06.793 "zone_append": false, 01:26:06.793 "compare": false, 01:26:06.793 "compare_and_write": false, 01:26:06.793 "abort": true, 01:26:06.793 "seek_hole": false, 01:26:06.793 "seek_data": false, 01:26:06.793 "copy": true, 01:26:06.793 "nvme_iov_md": false 01:26:06.793 }, 01:26:06.793 "memory_domains": [ 01:26:06.793 { 01:26:06.793 "dma_device_id": "system", 01:26:06.793 "dma_device_type": 1 01:26:06.793 }, 01:26:06.793 { 01:26:06.793 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:26:06.793 "dma_device_type": 2 01:26:06.793 } 01:26:06.793 ], 01:26:06.793 "driver_specific": {} 01:26:06.793 } 01:26:06.793 ] 01:26:06.793 05:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:06.793 05:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 01:26:06.793 05:20:58 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 01:26:06.793 05:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:26:06.793 05:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:26:06.793 05:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:26:06.793 05:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:26:06.794 05:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 01:26:06.794 05:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:26:06.794 05:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:26:06.794 05:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:26:06.794 05:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:26:06.794 05:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:26:06.794 05:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:26:06.794 05:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:06.794 05:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:26:06.794 05:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:06.794 05:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:26:06.794 "name": "Existed_Raid", 01:26:06.794 "uuid": "c4e2da11-d9e7-4466-ad0e-953b72112946", 01:26:06.794 "strip_size_kb": 0, 01:26:06.794 "state": "online", 01:26:06.794 "raid_level": "raid1", 
01:26:06.794 "superblock": false, 01:26:06.794 "num_base_bdevs": 4, 01:26:06.794 "num_base_bdevs_discovered": 4, 01:26:06.794 "num_base_bdevs_operational": 4, 01:26:06.794 "base_bdevs_list": [ 01:26:06.794 { 01:26:06.794 "name": "NewBaseBdev", 01:26:06.794 "uuid": "552eb0b9-3071-4e1a-b180-8b4bfca389aa", 01:26:06.794 "is_configured": true, 01:26:06.794 "data_offset": 0, 01:26:06.794 "data_size": 65536 01:26:06.794 }, 01:26:06.794 { 01:26:06.794 "name": "BaseBdev2", 01:26:06.794 "uuid": "e4204166-5c3c-4bc7-b4c2-98c482c475c6", 01:26:06.794 "is_configured": true, 01:26:06.794 "data_offset": 0, 01:26:06.794 "data_size": 65536 01:26:06.794 }, 01:26:06.794 { 01:26:06.794 "name": "BaseBdev3", 01:26:06.794 "uuid": "5c5bf540-fe37-41f6-b6b5-0ddd07db57c6", 01:26:06.794 "is_configured": true, 01:26:06.794 "data_offset": 0, 01:26:06.794 "data_size": 65536 01:26:06.794 }, 01:26:06.794 { 01:26:06.794 "name": "BaseBdev4", 01:26:06.794 "uuid": "f439a8cb-0495-43f7-833f-42baddb1e11b", 01:26:06.794 "is_configured": true, 01:26:06.794 "data_offset": 0, 01:26:06.794 "data_size": 65536 01:26:06.794 } 01:26:06.794 ] 01:26:06.794 }' 01:26:06.794 05:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:26:06.794 05:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:26:07.359 05:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 01:26:07.359 05:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 01:26:07.360 05:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 01:26:07.360 05:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 01:26:07.360 05:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 01:26:07.360 05:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev 
cmp_base_bdev 01:26:07.360 05:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 01:26:07.360 05:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 01:26:07.360 05:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:07.360 05:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:26:07.360 [2024-12-09 05:20:58.821800] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 01:26:07.360 05:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:07.360 05:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 01:26:07.360 "name": "Existed_Raid", 01:26:07.360 "aliases": [ 01:26:07.360 "c4e2da11-d9e7-4466-ad0e-953b72112946" 01:26:07.360 ], 01:26:07.360 "product_name": "Raid Volume", 01:26:07.360 "block_size": 512, 01:26:07.360 "num_blocks": 65536, 01:26:07.360 "uuid": "c4e2da11-d9e7-4466-ad0e-953b72112946", 01:26:07.360 "assigned_rate_limits": { 01:26:07.360 "rw_ios_per_sec": 0, 01:26:07.360 "rw_mbytes_per_sec": 0, 01:26:07.360 "r_mbytes_per_sec": 0, 01:26:07.360 "w_mbytes_per_sec": 0 01:26:07.360 }, 01:26:07.360 "claimed": false, 01:26:07.360 "zoned": false, 01:26:07.360 "supported_io_types": { 01:26:07.360 "read": true, 01:26:07.360 "write": true, 01:26:07.360 "unmap": false, 01:26:07.360 "flush": false, 01:26:07.360 "reset": true, 01:26:07.360 "nvme_admin": false, 01:26:07.360 "nvme_io": false, 01:26:07.360 "nvme_io_md": false, 01:26:07.360 "write_zeroes": true, 01:26:07.360 "zcopy": false, 01:26:07.360 "get_zone_info": false, 01:26:07.360 "zone_management": false, 01:26:07.360 "zone_append": false, 01:26:07.360 "compare": false, 01:26:07.360 "compare_and_write": false, 01:26:07.360 "abort": false, 01:26:07.360 "seek_hole": false, 01:26:07.360 "seek_data": false, 01:26:07.360 "copy": false, 01:26:07.360 
"nvme_iov_md": false 01:26:07.360 }, 01:26:07.360 "memory_domains": [ 01:26:07.360 { 01:26:07.360 "dma_device_id": "system", 01:26:07.360 "dma_device_type": 1 01:26:07.360 }, 01:26:07.360 { 01:26:07.360 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:26:07.360 "dma_device_type": 2 01:26:07.360 }, 01:26:07.360 { 01:26:07.360 "dma_device_id": "system", 01:26:07.360 "dma_device_type": 1 01:26:07.360 }, 01:26:07.360 { 01:26:07.360 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:26:07.360 "dma_device_type": 2 01:26:07.360 }, 01:26:07.360 { 01:26:07.360 "dma_device_id": "system", 01:26:07.360 "dma_device_type": 1 01:26:07.360 }, 01:26:07.360 { 01:26:07.360 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:26:07.360 "dma_device_type": 2 01:26:07.360 }, 01:26:07.360 { 01:26:07.360 "dma_device_id": "system", 01:26:07.360 "dma_device_type": 1 01:26:07.360 }, 01:26:07.360 { 01:26:07.360 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:26:07.360 "dma_device_type": 2 01:26:07.360 } 01:26:07.360 ], 01:26:07.360 "driver_specific": { 01:26:07.360 "raid": { 01:26:07.360 "uuid": "c4e2da11-d9e7-4466-ad0e-953b72112946", 01:26:07.360 "strip_size_kb": 0, 01:26:07.360 "state": "online", 01:26:07.360 "raid_level": "raid1", 01:26:07.360 "superblock": false, 01:26:07.360 "num_base_bdevs": 4, 01:26:07.360 "num_base_bdevs_discovered": 4, 01:26:07.360 "num_base_bdevs_operational": 4, 01:26:07.360 "base_bdevs_list": [ 01:26:07.360 { 01:26:07.360 "name": "NewBaseBdev", 01:26:07.360 "uuid": "552eb0b9-3071-4e1a-b180-8b4bfca389aa", 01:26:07.360 "is_configured": true, 01:26:07.360 "data_offset": 0, 01:26:07.360 "data_size": 65536 01:26:07.360 }, 01:26:07.360 { 01:26:07.360 "name": "BaseBdev2", 01:26:07.360 "uuid": "e4204166-5c3c-4bc7-b4c2-98c482c475c6", 01:26:07.360 "is_configured": true, 01:26:07.360 "data_offset": 0, 01:26:07.360 "data_size": 65536 01:26:07.360 }, 01:26:07.360 { 01:26:07.360 "name": "BaseBdev3", 01:26:07.360 "uuid": "5c5bf540-fe37-41f6-b6b5-0ddd07db57c6", 01:26:07.360 "is_configured": true, 
01:26:07.360 "data_offset": 0, 01:26:07.360 "data_size": 65536 01:26:07.360 }, 01:26:07.360 { 01:26:07.360 "name": "BaseBdev4", 01:26:07.360 "uuid": "f439a8cb-0495-43f7-833f-42baddb1e11b", 01:26:07.360 "is_configured": true, 01:26:07.360 "data_offset": 0, 01:26:07.360 "data_size": 65536 01:26:07.360 } 01:26:07.360 ] 01:26:07.360 } 01:26:07.360 } 01:26:07.360 }' 01:26:07.360 05:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 01:26:07.360 05:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 01:26:07.360 BaseBdev2 01:26:07.360 BaseBdev3 01:26:07.360 BaseBdev4' 01:26:07.360 05:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:26:07.360 05:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 01:26:07.360 05:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:26:07.360 05:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 01:26:07.360 05:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:07.360 05:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:26:07.360 05:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:26:07.619 05:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:07.619 05:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:26:07.619 05:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:26:07.619 05:20:59 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:26:07.619 05:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:26:07.619 05:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 01:26:07.619 05:20:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:07.619 05:20:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:26:07.619 05:20:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:07.619 05:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:26:07.619 05:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:26:07.619 05:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:26:07.619 05:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 01:26:07.619 05:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:26:07.619 05:20:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:07.619 05:20:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:26:07.619 05:20:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:07.619 05:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:26:07.619 05:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:26:07.619 05:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:26:07.619 05:20:59 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 01:26:07.619 05:20:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:07.619 05:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:26:07.619 05:20:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:26:07.619 05:20:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:07.619 05:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:26:07.619 05:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:26:07.619 05:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 01:26:07.619 05:20:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:07.619 05:20:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:26:07.619 [2024-12-09 05:20:59.157353] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 01:26:07.619 [2024-12-09 05:20:59.157417] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 01:26:07.619 [2024-12-09 05:20:59.157573] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 01:26:07.619 [2024-12-09 05:20:59.157976] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 01:26:07.619 [2024-12-09 05:20:59.158010] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 01:26:07.619 05:20:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:07.619 05:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 73254 
01:26:07.619 05:20:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 73254 ']' 01:26:07.619 05:20:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 73254 01:26:07.619 05:20:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 01:26:07.619 05:20:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:26:07.619 05:20:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73254 01:26:07.619 killing process with pid 73254 01:26:07.619 05:20:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:26:07.619 05:20:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:26:07.619 05:20:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73254' 01:26:07.619 05:20:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 73254 01:26:07.619 [2024-12-09 05:20:59.195080] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 01:26:07.619 05:20:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 73254 01:26:07.878 [2024-12-09 05:20:59.483506] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 01:26:09.332 05:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 01:26:09.332 01:26:09.332 real 0m12.819s 01:26:09.332 user 0m21.032s 01:26:09.332 sys 0m1.983s 01:26:09.332 05:21:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 01:26:09.332 05:21:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:26:09.332 ************************************ 01:26:09.332 END TEST raid_state_function_test 01:26:09.332 ************************************ 01:26:09.332 05:21:00 bdev_raid -- 
bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 01:26:09.332 05:21:00 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 01:26:09.332 05:21:00 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 01:26:09.332 05:21:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 01:26:09.332 ************************************ 01:26:09.332 START TEST raid_state_function_test_sb 01:26:09.332 ************************************ 01:26:09.332 05:21:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 true 01:26:09.332 05:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 01:26:09.332 05:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 01:26:09.332 05:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 01:26:09.332 05:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 01:26:09.332 05:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 01:26:09.332 05:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 01:26:09.332 05:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 01:26:09.332 05:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 01:26:09.332 05:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 01:26:09.332 05:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 01:26:09.332 05:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 01:26:09.332 05:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 01:26:09.332 05:21:00 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 01:26:09.332 05:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 01:26:09.332 05:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 01:26:09.332 05:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 01:26:09.332 05:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 01:26:09.333 05:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 01:26:09.333 05:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 01:26:09.333 05:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 01:26:09.333 05:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 01:26:09.333 05:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 01:26:09.333 05:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 01:26:09.333 05:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 01:26:09.333 05:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 01:26:09.333 05:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 01:26:09.333 05:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 01:26:09.333 05:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 01:26:09.333 Process raid pid: 73937 01:26:09.333 05:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=73937 01:26:09.333 05:21:00 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73937' 01:26:09.333 05:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 73937 01:26:09.333 05:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 01:26:09.333 05:21:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 73937 ']' 01:26:09.333 05:21:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:26:09.333 05:21:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 01:26:09.333 05:21:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:26:09.333 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:26:09.333 05:21:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 01:26:09.333 05:21:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:26:09.333 [2024-12-09 05:21:00.741594] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
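The `waitforlisten 73937` call above polls until the freshly started bdev_svc app answers on `/var/tmp/spdk.sock`, with `max_retries=100` bounding the wait. A minimal sketch of the same bounded-polling structure, using a plain file as a stand-in for the UNIX socket (names here are illustrative, not SPDK's):

```shell
#!/usr/bin/env bash
# Sketch of the waitforlisten idea: poll for a path with bounded retries.
# The real helper polls an RPC against /var/tmp/spdk.sock; waiting for a
# file to appear shows the same retry structure.
waitforpath() {
    local path=$1 max_retries=${2:-100}
    local i
    for ((i = 0; i < max_retries; i++)); do
        [ -e "$path" ] && return 0
        sleep 0.1
    done
    echo "timed out waiting for $path" >&2
    return 1
}

tmpfile=$(mktemp -u)                # path that does not exist yet
( sleep 0.3; touch "$tmpfile" ) &   # stand-in for the app creating its socket
waitforpath "$tmpfile" 50 && echo "listener up"
```

Bounding the retries matters in CI: without it, a crashed app would hang the job until the global timeout instead of failing fast with a useful message.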
01:26:09.333 [2024-12-09 05:21:00.741794] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:26:09.333 [2024-12-09 05:21:00.932004] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:26:09.592 [2024-12-09 05:21:01.096682] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:26:09.851 [2024-12-09 05:21:01.309934] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 01:26:09.851 [2024-12-09 05:21:01.309993] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 01:26:10.419 05:21:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:26:10.419 05:21:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 01:26:10.419 05:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 01:26:10.419 05:21:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:10.419 05:21:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:26:10.419 [2024-12-09 05:21:01.760835] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 01:26:10.419 [2024-12-09 05:21:01.760916] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 01:26:10.419 [2024-12-09 05:21:01.760936] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 01:26:10.419 [2024-12-09 05:21:01.760953] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 01:26:10.419 [2024-12-09 05:21:01.760964] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 01:26:10.419 [2024-12-09 05:21:01.760979] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 01:26:10.419 [2024-12-09 05:21:01.760989] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 01:26:10.419 [2024-12-09 05:21:01.761004] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 01:26:10.419 05:21:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:10.419 05:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 01:26:10.419 05:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:26:10.419 05:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:26:10.419 05:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:26:10.419 05:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:26:10.419 05:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 01:26:10.419 05:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:26:10.419 05:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:26:10.419 05:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:26:10.419 05:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 01:26:10.419 05:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:26:10.419 05:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:26:10.419 05:21:01 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:10.419 05:21:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:26:10.419 05:21:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:10.419 05:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:26:10.419 "name": "Existed_Raid", 01:26:10.419 "uuid": "71f2d345-c1eb-41f6-b23a-7b797259df4c", 01:26:10.419 "strip_size_kb": 0, 01:26:10.419 "state": "configuring", 01:26:10.419 "raid_level": "raid1", 01:26:10.419 "superblock": true, 01:26:10.419 "num_base_bdevs": 4, 01:26:10.419 "num_base_bdevs_discovered": 0, 01:26:10.419 "num_base_bdevs_operational": 4, 01:26:10.419 "base_bdevs_list": [ 01:26:10.419 { 01:26:10.419 "name": "BaseBdev1", 01:26:10.419 "uuid": "00000000-0000-0000-0000-000000000000", 01:26:10.419 "is_configured": false, 01:26:10.419 "data_offset": 0, 01:26:10.419 "data_size": 0 01:26:10.419 }, 01:26:10.419 { 01:26:10.419 "name": "BaseBdev2", 01:26:10.419 "uuid": "00000000-0000-0000-0000-000000000000", 01:26:10.419 "is_configured": false, 01:26:10.419 "data_offset": 0, 01:26:10.419 "data_size": 0 01:26:10.419 }, 01:26:10.419 { 01:26:10.419 "name": "BaseBdev3", 01:26:10.419 "uuid": "00000000-0000-0000-0000-000000000000", 01:26:10.419 "is_configured": false, 01:26:10.419 "data_offset": 0, 01:26:10.419 "data_size": 0 01:26:10.419 }, 01:26:10.419 { 01:26:10.419 "name": "BaseBdev4", 01:26:10.419 "uuid": "00000000-0000-0000-0000-000000000000", 01:26:10.419 "is_configured": false, 01:26:10.419 "data_offset": 0, 01:26:10.419 "data_size": 0 01:26:10.419 } 01:26:10.419 ] 01:26:10.419 }' 01:26:10.419 05:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:26:10.419 05:21:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:26:10.987 05:21:02 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 01:26:10.987 05:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:10.987 05:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:26:10.987 [2024-12-09 05:21:02.300911] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 01:26:10.987 [2024-12-09 05:21:02.300958] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 01:26:10.987 05:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:10.987 05:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 01:26:10.987 05:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:10.987 05:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:26:10.987 [2024-12-09 05:21:02.312886] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 01:26:10.987 [2024-12-09 05:21:02.313080] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 01:26:10.987 [2024-12-09 05:21:02.313107] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 01:26:10.987 [2024-12-09 05:21:02.313124] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 01:26:10.987 [2024-12-09 05:21:02.313134] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 01:26:10.987 [2024-12-09 05:21:02.313149] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 01:26:10.987 [2024-12-09 05:21:02.313159] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
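The `bdev_raid_create -s -r raid1 -b ... -n Existed_Raid` command above is assembled earlier in the test from `superblock_create_arg` (`-s` when superblock=true) and `strip_size_create_arg` (empty for raid1, which takes no strip size). A sketch of that composition; the `-z 64` strip-size flag for striped levels is recalled from the script, not confirmed here, and the command is only echoed since no SPDK app is running:

```shell
#!/usr/bin/env bash
# Recreate how the test composes the bdev_raid_create RPC arguments.
raid_level=raid1
superblock=true
base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4')

strip_size_create_arg=''
if [ "$raid_level" != raid1 ]; then
    # striped levels (e.g. raid0) pass a strip size in KiB; flag assumed
    strip_size_create_arg='-z 64'
fi

superblock_create_arg=''
if [ "$superblock" = true ]; then
    superblock_create_arg='-s'
fi

# shellcheck disable=SC2086  # word-splitting of optional args is intended
echo rpc_cmd bdev_raid_create $superblock_create_arg $strip_size_create_arg \
    -r "$raid_level" -b "'${base_bdevs[*]}'" -n Existed_Raid
```

For this run that yields `rpc_cmd bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid`, matching the command shape in the trace; the quoted base-bdev list is split into individual names by the RPC wrapper.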
01:26:10.987 [2024-12-09 05:21:02.313173] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 01:26:10.987 05:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:10.987 05:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 01:26:10.987 05:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:10.987 05:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:26:10.988 [2024-12-09 05:21:02.358709] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 01:26:10.988 BaseBdev1 01:26:10.988 05:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:10.988 05:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 01:26:10.988 05:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 01:26:10.988 05:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 01:26:10.988 05:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 01:26:10.988 05:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 01:26:10.988 05:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 01:26:10.988 05:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 01:26:10.988 05:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:10.988 05:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:26:10.988 05:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
01:26:10.988 05:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 01:26:10.988 05:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:10.988 05:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:26:10.988 [ 01:26:10.988 { 01:26:10.988 "name": "BaseBdev1", 01:26:10.988 "aliases": [ 01:26:10.988 "bd6666cd-44b5-4be0-9891-4aea68ab5edd" 01:26:10.988 ], 01:26:10.988 "product_name": "Malloc disk", 01:26:10.988 "block_size": 512, 01:26:10.988 "num_blocks": 65536, 01:26:10.988 "uuid": "bd6666cd-44b5-4be0-9891-4aea68ab5edd", 01:26:10.988 "assigned_rate_limits": { 01:26:10.988 "rw_ios_per_sec": 0, 01:26:10.988 "rw_mbytes_per_sec": 0, 01:26:10.988 "r_mbytes_per_sec": 0, 01:26:10.988 "w_mbytes_per_sec": 0 01:26:10.988 }, 01:26:10.988 "claimed": true, 01:26:10.988 "claim_type": "exclusive_write", 01:26:10.988 "zoned": false, 01:26:10.988 "supported_io_types": { 01:26:10.988 "read": true, 01:26:10.988 "write": true, 01:26:10.988 "unmap": true, 01:26:10.988 "flush": true, 01:26:10.988 "reset": true, 01:26:10.988 "nvme_admin": false, 01:26:10.988 "nvme_io": false, 01:26:10.988 "nvme_io_md": false, 01:26:10.988 "write_zeroes": true, 01:26:10.988 "zcopy": true, 01:26:10.988 "get_zone_info": false, 01:26:10.988 "zone_management": false, 01:26:10.988 "zone_append": false, 01:26:10.988 "compare": false, 01:26:10.988 "compare_and_write": false, 01:26:10.988 "abort": true, 01:26:10.988 "seek_hole": false, 01:26:10.988 "seek_data": false, 01:26:10.988 "copy": true, 01:26:10.988 "nvme_iov_md": false 01:26:10.988 }, 01:26:10.988 "memory_domains": [ 01:26:10.988 { 01:26:10.988 "dma_device_id": "system", 01:26:10.988 "dma_device_type": 1 01:26:10.988 }, 01:26:10.988 { 01:26:10.988 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:26:10.988 "dma_device_type": 2 01:26:10.988 } 01:26:10.988 ], 01:26:10.988 "driver_specific": {} 
01:26:10.988 } 01:26:10.988 ] 01:26:10.988 05:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:10.988 05:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 01:26:10.988 05:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 01:26:10.988 05:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:26:10.988 05:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:26:10.988 05:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:26:10.988 05:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:26:10.988 05:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 01:26:10.988 05:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:26:10.988 05:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:26:10.988 05:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:26:10.988 05:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 01:26:10.988 05:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:26:10.988 05:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:26:10.988 05:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:10.988 05:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:26:10.988 05:21:02 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:10.988 05:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:26:10.988 "name": "Existed_Raid", 01:26:10.988 "uuid": "74fc20c7-b9bf-45dd-9d49-8c91ad2f4390", 01:26:10.988 "strip_size_kb": 0, 01:26:10.988 "state": "configuring", 01:26:10.988 "raid_level": "raid1", 01:26:10.988 "superblock": true, 01:26:10.988 "num_base_bdevs": 4, 01:26:10.988 "num_base_bdevs_discovered": 1, 01:26:10.988 "num_base_bdevs_operational": 4, 01:26:10.988 "base_bdevs_list": [ 01:26:10.988 { 01:26:10.988 "name": "BaseBdev1", 01:26:10.988 "uuid": "bd6666cd-44b5-4be0-9891-4aea68ab5edd", 01:26:10.988 "is_configured": true, 01:26:10.988 "data_offset": 2048, 01:26:10.988 "data_size": 63488 01:26:10.988 }, 01:26:10.988 { 01:26:10.988 "name": "BaseBdev2", 01:26:10.988 "uuid": "00000000-0000-0000-0000-000000000000", 01:26:10.988 "is_configured": false, 01:26:10.988 "data_offset": 0, 01:26:10.988 "data_size": 0 01:26:10.988 }, 01:26:10.988 { 01:26:10.988 "name": "BaseBdev3", 01:26:10.988 "uuid": "00000000-0000-0000-0000-000000000000", 01:26:10.988 "is_configured": false, 01:26:10.988 "data_offset": 0, 01:26:10.988 "data_size": 0 01:26:10.988 }, 01:26:10.988 { 01:26:10.988 "name": "BaseBdev4", 01:26:10.988 "uuid": "00000000-0000-0000-0000-000000000000", 01:26:10.988 "is_configured": false, 01:26:10.988 "data_offset": 0, 01:26:10.988 "data_size": 0 01:26:10.988 } 01:26:10.988 ] 01:26:10.988 }' 01:26:10.988 05:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:26:10.988 05:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:26:11.556 05:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 01:26:11.556 05:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:11.556 05:21:02 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 01:26:11.556 [2024-12-09 05:21:02.927004] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 01:26:11.556 [2024-12-09 05:21:02.927077] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 01:26:11.556 05:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:11.556 05:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 01:26:11.556 05:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:11.556 05:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:26:11.556 [2024-12-09 05:21:02.935051] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 01:26:11.556 [2024-12-09 05:21:02.937941] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 01:26:11.556 [2024-12-09 05:21:02.938023] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 01:26:11.556 [2024-12-09 05:21:02.938048] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 01:26:11.556 [2024-12-09 05:21:02.938064] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 01:26:11.556 [2024-12-09 05:21:02.938073] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 01:26:11.556 [2024-12-09 05:21:02.938086] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 01:26:11.556 05:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:11.557 05:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 01:26:11.557 05:21:02 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 01:26:11.557 05:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 01:26:11.557 05:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:26:11.557 05:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:26:11.557 05:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:26:11.557 05:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:26:11.557 05:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 01:26:11.557 05:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:26:11.557 05:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:26:11.557 05:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:26:11.557 05:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 01:26:11.557 05:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:26:11.557 05:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:26:11.557 05:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:11.557 05:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:26:11.557 05:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:11.557 05:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:26:11.557 "name": 
"Existed_Raid", 01:26:11.557 "uuid": "97ca5cdf-0181-4034-8cf2-0e2c578d27a7", 01:26:11.557 "strip_size_kb": 0, 01:26:11.557 "state": "configuring", 01:26:11.557 "raid_level": "raid1", 01:26:11.557 "superblock": true, 01:26:11.557 "num_base_bdevs": 4, 01:26:11.557 "num_base_bdevs_discovered": 1, 01:26:11.557 "num_base_bdevs_operational": 4, 01:26:11.557 "base_bdevs_list": [ 01:26:11.557 { 01:26:11.557 "name": "BaseBdev1", 01:26:11.557 "uuid": "bd6666cd-44b5-4be0-9891-4aea68ab5edd", 01:26:11.557 "is_configured": true, 01:26:11.557 "data_offset": 2048, 01:26:11.557 "data_size": 63488 01:26:11.557 }, 01:26:11.557 { 01:26:11.557 "name": "BaseBdev2", 01:26:11.557 "uuid": "00000000-0000-0000-0000-000000000000", 01:26:11.557 "is_configured": false, 01:26:11.557 "data_offset": 0, 01:26:11.557 "data_size": 0 01:26:11.557 }, 01:26:11.557 { 01:26:11.557 "name": "BaseBdev3", 01:26:11.557 "uuid": "00000000-0000-0000-0000-000000000000", 01:26:11.557 "is_configured": false, 01:26:11.557 "data_offset": 0, 01:26:11.557 "data_size": 0 01:26:11.557 }, 01:26:11.557 { 01:26:11.557 "name": "BaseBdev4", 01:26:11.557 "uuid": "00000000-0000-0000-0000-000000000000", 01:26:11.557 "is_configured": false, 01:26:11.557 "data_offset": 0, 01:26:11.557 "data_size": 0 01:26:11.557 } 01:26:11.557 ] 01:26:11.557 }' 01:26:11.557 05:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:26:11.557 05:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:26:12.124 05:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 01:26:12.124 05:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:12.124 05:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:26:12.124 [2024-12-09 05:21:03.481834] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 01:26:12.124 
BaseBdev2 01:26:12.124 05:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:12.124 05:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 01:26:12.124 05:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 01:26:12.124 05:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 01:26:12.124 05:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 01:26:12.124 05:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 01:26:12.124 05:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 01:26:12.124 05:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 01:26:12.124 05:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:12.124 05:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:26:12.124 05:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:12.124 05:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 01:26:12.124 05:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:12.124 05:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:26:12.124 [ 01:26:12.124 { 01:26:12.124 "name": "BaseBdev2", 01:26:12.124 "aliases": [ 01:26:12.124 "d9244faf-4671-442d-aa41-a8c48b030773" 01:26:12.124 ], 01:26:12.124 "product_name": "Malloc disk", 01:26:12.124 "block_size": 512, 01:26:12.124 "num_blocks": 65536, 01:26:12.124 "uuid": "d9244faf-4671-442d-aa41-a8c48b030773", 01:26:12.124 "assigned_rate_limits": { 
01:26:12.124 "rw_ios_per_sec": 0, 01:26:12.124 "rw_mbytes_per_sec": 0, 01:26:12.124 "r_mbytes_per_sec": 0, 01:26:12.124 "w_mbytes_per_sec": 0 01:26:12.124 }, 01:26:12.124 "claimed": true, 01:26:12.124 "claim_type": "exclusive_write", 01:26:12.124 "zoned": false, 01:26:12.124 "supported_io_types": { 01:26:12.124 "read": true, 01:26:12.124 "write": true, 01:26:12.124 "unmap": true, 01:26:12.124 "flush": true, 01:26:12.124 "reset": true, 01:26:12.124 "nvme_admin": false, 01:26:12.124 "nvme_io": false, 01:26:12.124 "nvme_io_md": false, 01:26:12.124 "write_zeroes": true, 01:26:12.124 "zcopy": true, 01:26:12.124 "get_zone_info": false, 01:26:12.124 "zone_management": false, 01:26:12.124 "zone_append": false, 01:26:12.124 "compare": false, 01:26:12.124 "compare_and_write": false, 01:26:12.124 "abort": true, 01:26:12.124 "seek_hole": false, 01:26:12.124 "seek_data": false, 01:26:12.124 "copy": true, 01:26:12.124 "nvme_iov_md": false 01:26:12.124 }, 01:26:12.124 "memory_domains": [ 01:26:12.124 { 01:26:12.124 "dma_device_id": "system", 01:26:12.124 "dma_device_type": 1 01:26:12.124 }, 01:26:12.124 { 01:26:12.124 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:26:12.124 "dma_device_type": 2 01:26:12.124 } 01:26:12.124 ], 01:26:12.124 "driver_specific": {} 01:26:12.124 } 01:26:12.124 ] 01:26:12.124 05:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:12.124 05:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 01:26:12.124 05:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 01:26:12.124 05:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 01:26:12.124 05:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 01:26:12.124 05:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 
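The `verify_raid_bdev_state` calls interleaved through this trace pull the `Existed_Raid` entry out of `bdev_raid_get_bdevs all` with jq and compare fields such as `state` and `num_base_bdevs_discovered`. The same extraction can be reproduced against a canned JSON snippet (abridged from the output above; requires jq on PATH):

```shell
#!/usr/bin/env bash
# Reproduce the jq extraction verify_raid_bdev_state applies to
# "bdev_raid_get_bdevs all" output, using a canned sample document.
raid_bdevs='[{
  "name": "Existed_Raid",
  "state": "configuring",
  "raid_level": "raid1",
  "superblock": true,
  "num_base_bdevs": 4,
  "num_base_bdevs_discovered": 2,
  "base_bdevs_list": [
    {"name": "BaseBdev1", "is_configured": true},
    {"name": "BaseBdev2", "is_configured": true},
    {"name": "BaseBdev3", "is_configured": false},
    {"name": "BaseBdev4", "is_configured": false}
  ]
}]'

info=$(jq -r '.[] | select(.name == "Existed_Raid")' <<< "$raid_bdevs")
state=$(jq -r '.state' <<< "$info")
discovered=$(jq -r '[.base_bdevs_list[] | select(.is_configured)] | length' \
    <<< "$info")
echo "state=$state discovered=$discovered"
```

This mirrors the point the log has reached: two of four base bdevs claimed, so the raid stays in `configuring` until BaseBdev3 and BaseBdev4 are added.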
01:26:12.124 05:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:26:12.124 05:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:26:12.124 05:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:26:12.124 05:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 01:26:12.124 05:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:26:12.124 05:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:26:12.124 05:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:26:12.124 05:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 01:26:12.124 05:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:26:12.124 05:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:12.124 05:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:26:12.124 05:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:26:12.124 05:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:12.124 05:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:26:12.124 "name": "Existed_Raid", 01:26:12.124 "uuid": "97ca5cdf-0181-4034-8cf2-0e2c578d27a7", 01:26:12.124 "strip_size_kb": 0, 01:26:12.124 "state": "configuring", 01:26:12.124 "raid_level": "raid1", 01:26:12.124 "superblock": true, 01:26:12.124 "num_base_bdevs": 4, 01:26:12.124 "num_base_bdevs_discovered": 2, 01:26:12.124 "num_base_bdevs_operational": 4, 01:26:12.124 
"base_bdevs_list": [ 01:26:12.124 { 01:26:12.124 "name": "BaseBdev1", 01:26:12.124 "uuid": "bd6666cd-44b5-4be0-9891-4aea68ab5edd", 01:26:12.124 "is_configured": true, 01:26:12.124 "data_offset": 2048, 01:26:12.124 "data_size": 63488 01:26:12.124 }, 01:26:12.124 { 01:26:12.124 "name": "BaseBdev2", 01:26:12.124 "uuid": "d9244faf-4671-442d-aa41-a8c48b030773", 01:26:12.124 "is_configured": true, 01:26:12.124 "data_offset": 2048, 01:26:12.124 "data_size": 63488 01:26:12.124 }, 01:26:12.124 { 01:26:12.124 "name": "BaseBdev3", 01:26:12.124 "uuid": "00000000-0000-0000-0000-000000000000", 01:26:12.124 "is_configured": false, 01:26:12.124 "data_offset": 0, 01:26:12.124 "data_size": 0 01:26:12.124 }, 01:26:12.124 { 01:26:12.124 "name": "BaseBdev4", 01:26:12.124 "uuid": "00000000-0000-0000-0000-000000000000", 01:26:12.124 "is_configured": false, 01:26:12.124 "data_offset": 0, 01:26:12.124 "data_size": 0 01:26:12.124 } 01:26:12.124 ] 01:26:12.124 }' 01:26:12.124 05:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:26:12.124 05:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:26:12.692 05:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 01:26:12.692 05:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:12.692 05:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:26:12.692 [2024-12-09 05:21:04.094974] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 01:26:12.692 BaseBdev3 01:26:12.692 05:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:12.692 05:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 01:26:12.692 05:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local 
bdev_name=BaseBdev3 01:26:12.692 05:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 01:26:12.692 05:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 01:26:12.692 05:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 01:26:12.692 05:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 01:26:12.692 05:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 01:26:12.692 05:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:12.692 05:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:26:12.692 05:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:12.692 05:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 01:26:12.692 05:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:12.692 05:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:26:12.692 [ 01:26:12.692 { 01:26:12.692 "name": "BaseBdev3", 01:26:12.692 "aliases": [ 01:26:12.692 "8e65a08e-ab7c-464f-b650-a7c4d5e9130b" 01:26:12.692 ], 01:26:12.692 "product_name": "Malloc disk", 01:26:12.692 "block_size": 512, 01:26:12.692 "num_blocks": 65536, 01:26:12.692 "uuid": "8e65a08e-ab7c-464f-b650-a7c4d5e9130b", 01:26:12.692 "assigned_rate_limits": { 01:26:12.692 "rw_ios_per_sec": 0, 01:26:12.692 "rw_mbytes_per_sec": 0, 01:26:12.692 "r_mbytes_per_sec": 0, 01:26:12.692 "w_mbytes_per_sec": 0 01:26:12.692 }, 01:26:12.692 "claimed": true, 01:26:12.692 "claim_type": "exclusive_write", 01:26:12.692 "zoned": false, 01:26:12.692 "supported_io_types": { 01:26:12.692 "read": true, 01:26:12.692 
"write": true, 01:26:12.692 "unmap": true, 01:26:12.692 "flush": true, 01:26:12.692 "reset": true, 01:26:12.692 "nvme_admin": false, 01:26:12.692 "nvme_io": false, 01:26:12.692 "nvme_io_md": false, 01:26:12.692 "write_zeroes": true, 01:26:12.692 "zcopy": true, 01:26:12.692 "get_zone_info": false, 01:26:12.692 "zone_management": false, 01:26:12.692 "zone_append": false, 01:26:12.692 "compare": false, 01:26:12.692 "compare_and_write": false, 01:26:12.692 "abort": true, 01:26:12.692 "seek_hole": false, 01:26:12.692 "seek_data": false, 01:26:12.692 "copy": true, 01:26:12.692 "nvme_iov_md": false 01:26:12.692 }, 01:26:12.692 "memory_domains": [ 01:26:12.692 { 01:26:12.692 "dma_device_id": "system", 01:26:12.692 "dma_device_type": 1 01:26:12.692 }, 01:26:12.692 { 01:26:12.692 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:26:12.692 "dma_device_type": 2 01:26:12.692 } 01:26:12.692 ], 01:26:12.692 "driver_specific": {} 01:26:12.692 } 01:26:12.692 ] 01:26:12.692 05:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:12.692 05:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 01:26:12.692 05:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 01:26:12.692 05:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 01:26:12.692 05:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 01:26:12.692 05:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:26:12.692 05:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:26:12.692 05:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:26:12.692 05:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 01:26:12.692 05:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 01:26:12.692 05:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:26:12.692 05:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:26:12.692 05:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:26:12.692 05:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 01:26:12.692 05:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:26:12.692 05:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:26:12.692 05:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:12.692 05:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:26:12.692 05:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:12.692 05:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:26:12.692 "name": "Existed_Raid", 01:26:12.692 "uuid": "97ca5cdf-0181-4034-8cf2-0e2c578d27a7", 01:26:12.692 "strip_size_kb": 0, 01:26:12.692 "state": "configuring", 01:26:12.692 "raid_level": "raid1", 01:26:12.692 "superblock": true, 01:26:12.692 "num_base_bdevs": 4, 01:26:12.692 "num_base_bdevs_discovered": 3, 01:26:12.692 "num_base_bdevs_operational": 4, 01:26:12.692 "base_bdevs_list": [ 01:26:12.692 { 01:26:12.692 "name": "BaseBdev1", 01:26:12.692 "uuid": "bd6666cd-44b5-4be0-9891-4aea68ab5edd", 01:26:12.692 "is_configured": true, 01:26:12.692 "data_offset": 2048, 01:26:12.692 "data_size": 63488 01:26:12.692 }, 01:26:12.692 { 01:26:12.692 "name": "BaseBdev2", 01:26:12.692 "uuid": 
"d9244faf-4671-442d-aa41-a8c48b030773", 01:26:12.692 "is_configured": true, 01:26:12.692 "data_offset": 2048, 01:26:12.692 "data_size": 63488 01:26:12.692 }, 01:26:12.692 { 01:26:12.692 "name": "BaseBdev3", 01:26:12.692 "uuid": "8e65a08e-ab7c-464f-b650-a7c4d5e9130b", 01:26:12.693 "is_configured": true, 01:26:12.693 "data_offset": 2048, 01:26:12.693 "data_size": 63488 01:26:12.693 }, 01:26:12.693 { 01:26:12.693 "name": "BaseBdev4", 01:26:12.693 "uuid": "00000000-0000-0000-0000-000000000000", 01:26:12.693 "is_configured": false, 01:26:12.693 "data_offset": 0, 01:26:12.693 "data_size": 0 01:26:12.693 } 01:26:12.693 ] 01:26:12.693 }' 01:26:12.693 05:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:26:12.693 05:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:26:13.259 05:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 01:26:13.259 05:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:13.259 05:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:26:13.259 [2024-12-09 05:21:04.684710] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 01:26:13.259 [2024-12-09 05:21:04.685161] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 01:26:13.259 [2024-12-09 05:21:04.685181] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 01:26:13.259 BaseBdev4 01:26:13.259 [2024-12-09 05:21:04.685581] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 01:26:13.259 [2024-12-09 05:21:04.685823] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 01:26:13.259 [2024-12-09 05:21:04.685948] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 01:26:13.259 [2024-12-09 05:21:04.686156] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:26:13.259 05:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:13.259 05:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 01:26:13.259 05:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 01:26:13.259 05:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 01:26:13.259 05:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 01:26:13.259 05:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 01:26:13.259 05:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 01:26:13.259 05:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 01:26:13.259 05:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:13.259 05:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:26:13.259 05:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:13.259 05:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 01:26:13.259 05:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:13.259 05:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:26:13.259 [ 01:26:13.259 { 01:26:13.259 "name": "BaseBdev4", 01:26:13.259 "aliases": [ 01:26:13.259 "ddf5e303-10a3-427d-94cf-75dabe067f98" 01:26:13.259 ], 01:26:13.259 "product_name": "Malloc disk", 01:26:13.259 "block_size": 512, 01:26:13.259 
"num_blocks": 65536, 01:26:13.259 "uuid": "ddf5e303-10a3-427d-94cf-75dabe067f98", 01:26:13.259 "assigned_rate_limits": { 01:26:13.259 "rw_ios_per_sec": 0, 01:26:13.259 "rw_mbytes_per_sec": 0, 01:26:13.259 "r_mbytes_per_sec": 0, 01:26:13.259 "w_mbytes_per_sec": 0 01:26:13.259 }, 01:26:13.259 "claimed": true, 01:26:13.259 "claim_type": "exclusive_write", 01:26:13.259 "zoned": false, 01:26:13.259 "supported_io_types": { 01:26:13.259 "read": true, 01:26:13.259 "write": true, 01:26:13.259 "unmap": true, 01:26:13.259 "flush": true, 01:26:13.259 "reset": true, 01:26:13.259 "nvme_admin": false, 01:26:13.259 "nvme_io": false, 01:26:13.259 "nvme_io_md": false, 01:26:13.259 "write_zeroes": true, 01:26:13.259 "zcopy": true, 01:26:13.259 "get_zone_info": false, 01:26:13.259 "zone_management": false, 01:26:13.259 "zone_append": false, 01:26:13.259 "compare": false, 01:26:13.259 "compare_and_write": false, 01:26:13.259 "abort": true, 01:26:13.259 "seek_hole": false, 01:26:13.259 "seek_data": false, 01:26:13.259 "copy": true, 01:26:13.259 "nvme_iov_md": false 01:26:13.259 }, 01:26:13.259 "memory_domains": [ 01:26:13.259 { 01:26:13.260 "dma_device_id": "system", 01:26:13.260 "dma_device_type": 1 01:26:13.260 }, 01:26:13.260 { 01:26:13.260 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:26:13.260 "dma_device_type": 2 01:26:13.260 } 01:26:13.260 ], 01:26:13.260 "driver_specific": {} 01:26:13.260 } 01:26:13.260 ] 01:26:13.260 05:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:13.260 05:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 01:26:13.260 05:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 01:26:13.260 05:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 01:26:13.260 05:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 
01:26:13.260 05:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:26:13.260 05:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:26:13.260 05:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:26:13.260 05:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:26:13.260 05:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 01:26:13.260 05:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:26:13.260 05:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:26:13.260 05:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:26:13.260 05:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 01:26:13.260 05:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:26:13.260 05:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:26:13.260 05:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:13.260 05:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:26:13.260 05:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:13.260 05:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:26:13.260 "name": "Existed_Raid", 01:26:13.260 "uuid": "97ca5cdf-0181-4034-8cf2-0e2c578d27a7", 01:26:13.260 "strip_size_kb": 0, 01:26:13.260 "state": "online", 01:26:13.260 "raid_level": "raid1", 01:26:13.260 "superblock": true, 01:26:13.260 "num_base_bdevs": 4, 
01:26:13.260 "num_base_bdevs_discovered": 4, 01:26:13.260 "num_base_bdevs_operational": 4, 01:26:13.260 "base_bdevs_list": [ 01:26:13.260 { 01:26:13.260 "name": "BaseBdev1", 01:26:13.260 "uuid": "bd6666cd-44b5-4be0-9891-4aea68ab5edd", 01:26:13.260 "is_configured": true, 01:26:13.260 "data_offset": 2048, 01:26:13.260 "data_size": 63488 01:26:13.260 }, 01:26:13.260 { 01:26:13.260 "name": "BaseBdev2", 01:26:13.260 "uuid": "d9244faf-4671-442d-aa41-a8c48b030773", 01:26:13.260 "is_configured": true, 01:26:13.260 "data_offset": 2048, 01:26:13.260 "data_size": 63488 01:26:13.260 }, 01:26:13.260 { 01:26:13.260 "name": "BaseBdev3", 01:26:13.260 "uuid": "8e65a08e-ab7c-464f-b650-a7c4d5e9130b", 01:26:13.260 "is_configured": true, 01:26:13.260 "data_offset": 2048, 01:26:13.260 "data_size": 63488 01:26:13.260 }, 01:26:13.260 { 01:26:13.260 "name": "BaseBdev4", 01:26:13.260 "uuid": "ddf5e303-10a3-427d-94cf-75dabe067f98", 01:26:13.260 "is_configured": true, 01:26:13.260 "data_offset": 2048, 01:26:13.260 "data_size": 63488 01:26:13.260 } 01:26:13.260 ] 01:26:13.260 }' 01:26:13.260 05:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:26:13.260 05:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:26:13.826 05:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 01:26:13.826 05:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 01:26:13.826 05:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 01:26:13.826 05:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 01:26:13.826 05:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 01:26:13.826 05:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 01:26:13.826 
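The `verify_raid_bdev_state` calls traced above boil down to fetching the raid bdev JSON via `bdev_raid_get_bdevs`, selecting the named entry with `jq`, and comparing a handful of fields. Below is a minimal Python sketch of that verification logic — not the actual shell/jq helper from `bdev_raid.sh`, just an illustration; the field names and sample values are taken from the JSON captured in this log:

```python
import json

# Sample trimmed from the bdev_raid_get_bdevs output captured in this log
RAID_JSON = '''[{
  "name": "Existed_Raid",
  "state": "online",
  "raid_level": "raid1",
  "strip_size_kb": 0,
  "num_base_bdevs": 4,
  "num_base_bdevs_discovered": 4,
  "num_base_bdevs_operational": 4,
  "base_bdevs_list": [
    {"name": "BaseBdev1", "is_configured": true},
    {"name": "BaseBdev2", "is_configured": true},
    {"name": "BaseBdev3", "is_configured": true},
    {"name": "BaseBdev4", "is_configured": true}
  ]
}]'''

def verify_raid_bdev_state(bdevs, name, expected_state, raid_level, strip_size, operational):
    # Select the named raid bdev, as the shell helper does with
    # jq '.[] | select(.name == "...")', then compare the same fields.
    info = next(b for b in bdevs if b["name"] == name)
    assert info["state"] == expected_state, info["state"]
    assert info["raid_level"] == raid_level
    assert info["strip_size_kb"] == strip_size
    assert info["num_base_bdevs_operational"] == operational
    # Discovered count must match the configured entries in base_bdevs_list.
    configured = sum(b["is_configured"] for b in info["base_bdevs_list"])
    assert configured == info["num_base_bdevs_discovered"]
    return info

info = verify_raid_bdev_state(json.loads(RAID_JSON), "Existed_Raid", "online", "raid1", 0, 4)
print(info["state"])  # -> online
```

In the log this check runs once per added base bdev: first with `expected_state=configuring` while BaseBdev3/BaseBdev4 are still all-zero placeholders, then with `online` once all four are discovered.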
05:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 01:26:13.826 05:21:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:13.826 05:21:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:26:13.826 05:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 01:26:13.827 [2024-12-09 05:21:05.241403] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 01:26:13.827 05:21:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:13.827 05:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 01:26:13.827 "name": "Existed_Raid", 01:26:13.827 "aliases": [ 01:26:13.827 "97ca5cdf-0181-4034-8cf2-0e2c578d27a7" 01:26:13.827 ], 01:26:13.827 "product_name": "Raid Volume", 01:26:13.827 "block_size": 512, 01:26:13.827 "num_blocks": 63488, 01:26:13.827 "uuid": "97ca5cdf-0181-4034-8cf2-0e2c578d27a7", 01:26:13.827 "assigned_rate_limits": { 01:26:13.827 "rw_ios_per_sec": 0, 01:26:13.827 "rw_mbytes_per_sec": 0, 01:26:13.827 "r_mbytes_per_sec": 0, 01:26:13.827 "w_mbytes_per_sec": 0 01:26:13.827 }, 01:26:13.827 "claimed": false, 01:26:13.827 "zoned": false, 01:26:13.827 "supported_io_types": { 01:26:13.827 "read": true, 01:26:13.827 "write": true, 01:26:13.827 "unmap": false, 01:26:13.827 "flush": false, 01:26:13.827 "reset": true, 01:26:13.827 "nvme_admin": false, 01:26:13.827 "nvme_io": false, 01:26:13.827 "nvme_io_md": false, 01:26:13.827 "write_zeroes": true, 01:26:13.827 "zcopy": false, 01:26:13.827 "get_zone_info": false, 01:26:13.827 "zone_management": false, 01:26:13.827 "zone_append": false, 01:26:13.827 "compare": false, 01:26:13.827 "compare_and_write": false, 01:26:13.827 "abort": false, 01:26:13.827 "seek_hole": false, 01:26:13.827 "seek_data": false, 01:26:13.827 "copy": false, 01:26:13.827 
"nvme_iov_md": false 01:26:13.827 }, 01:26:13.827 "memory_domains": [ 01:26:13.827 { 01:26:13.827 "dma_device_id": "system", 01:26:13.827 "dma_device_type": 1 01:26:13.827 }, 01:26:13.827 { 01:26:13.827 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:26:13.827 "dma_device_type": 2 01:26:13.827 }, 01:26:13.827 { 01:26:13.827 "dma_device_id": "system", 01:26:13.827 "dma_device_type": 1 01:26:13.827 }, 01:26:13.827 { 01:26:13.827 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:26:13.827 "dma_device_type": 2 01:26:13.827 }, 01:26:13.827 { 01:26:13.827 "dma_device_id": "system", 01:26:13.827 "dma_device_type": 1 01:26:13.827 }, 01:26:13.827 { 01:26:13.827 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:26:13.827 "dma_device_type": 2 01:26:13.827 }, 01:26:13.827 { 01:26:13.827 "dma_device_id": "system", 01:26:13.827 "dma_device_type": 1 01:26:13.827 }, 01:26:13.827 { 01:26:13.827 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:26:13.827 "dma_device_type": 2 01:26:13.827 } 01:26:13.827 ], 01:26:13.827 "driver_specific": { 01:26:13.827 "raid": { 01:26:13.827 "uuid": "97ca5cdf-0181-4034-8cf2-0e2c578d27a7", 01:26:13.827 "strip_size_kb": 0, 01:26:13.827 "state": "online", 01:26:13.827 "raid_level": "raid1", 01:26:13.827 "superblock": true, 01:26:13.827 "num_base_bdevs": 4, 01:26:13.827 "num_base_bdevs_discovered": 4, 01:26:13.827 "num_base_bdevs_operational": 4, 01:26:13.827 "base_bdevs_list": [ 01:26:13.827 { 01:26:13.827 "name": "BaseBdev1", 01:26:13.827 "uuid": "bd6666cd-44b5-4be0-9891-4aea68ab5edd", 01:26:13.827 "is_configured": true, 01:26:13.827 "data_offset": 2048, 01:26:13.827 "data_size": 63488 01:26:13.827 }, 01:26:13.827 { 01:26:13.827 "name": "BaseBdev2", 01:26:13.827 "uuid": "d9244faf-4671-442d-aa41-a8c48b030773", 01:26:13.827 "is_configured": true, 01:26:13.827 "data_offset": 2048, 01:26:13.827 "data_size": 63488 01:26:13.827 }, 01:26:13.827 { 01:26:13.827 "name": "BaseBdev3", 01:26:13.827 "uuid": "8e65a08e-ab7c-464f-b650-a7c4d5e9130b", 01:26:13.827 "is_configured": true, 
01:26:13.827 "data_offset": 2048, 01:26:13.827 "data_size": 63488 01:26:13.827 }, 01:26:13.827 { 01:26:13.827 "name": "BaseBdev4", 01:26:13.827 "uuid": "ddf5e303-10a3-427d-94cf-75dabe067f98", 01:26:13.827 "is_configured": true, 01:26:13.827 "data_offset": 2048, 01:26:13.827 "data_size": 63488 01:26:13.827 } 01:26:13.827 ] 01:26:13.827 } 01:26:13.827 } 01:26:13.827 }' 01:26:13.827 05:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 01:26:13.827 05:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 01:26:13.827 BaseBdev2 01:26:13.827 BaseBdev3 01:26:13.827 BaseBdev4' 01:26:13.827 05:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:26:13.827 05:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 01:26:13.827 05:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:26:13.827 05:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 01:26:13.827 05:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:26:13.827 05:21:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:13.827 05:21:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:26:13.827 05:21:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:14.085 05:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:26:14.085 05:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:26:14.085 05:21:05 
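The repeated `[[ 512 == \5\1\2\ \ \  ]]` comparisons traced here come from `verify_raid_bdev_properties`, which builds one signature string per bdev by joining `block_size`, `md_size`, `md_interleave`, and `dif_type` with `jq`, then requires the raid bdev's signature to match each base bdev's. A hedged Python sketch of that comparison (the real helper is shell/jq; the null metadata fields joining to empty strings are what produce the trailing spaces visible in the log):

```python
def bdev_signature(bdev):
    # Mirrors jq's '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")':
    # null/missing values join as empty strings, which is why the log shows
    # cmp_raid_bdev='512   ' with trailing blanks.
    keys = ("block_size", "md_size", "md_interleave", "dif_type")
    return " ".join("" if bdev.get(k) is None else str(bdev[k]) for k in keys)

# Illustrative values matching this log: 512-byte blocks, no metadata/DIF.
raid_bdev = {"block_size": 512, "md_size": None, "md_interleave": None, "dif_type": None}
base_bdev = {"block_size": 512, "md_size": None, "md_interleave": None, "dif_type": None}

cmp_raid = bdev_signature(raid_bdev)
cmp_base = bdev_signature(base_bdev)
assert cmp_raid == cmp_base
print(repr(cmp_raid))  # -> '512   '
```

Matching signatures guarantee the raid volume exposes the same block layout as every base bdev it was assembled from.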
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:26:14.085 05:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 01:26:14.085 05:21:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:14.085 05:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:26:14.085 05:21:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:26:14.085 05:21:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:14.085 05:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:26:14.085 05:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:26:14.085 05:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:26:14.085 05:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 01:26:14.085 05:21:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:14.085 05:21:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:26:14.085 05:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:26:14.085 05:21:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:14.085 05:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:26:14.085 05:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:26:14.085 05:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 01:26:14.085 05:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 01:26:14.085 05:21:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:14.085 05:21:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:26:14.085 05:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:26:14.085 05:21:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:14.085 05:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:26:14.085 05:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:26:14.085 05:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 01:26:14.085 05:21:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:14.085 05:21:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:26:14.085 [2024-12-09 05:21:05.609077] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 01:26:14.085 05:21:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:14.085 05:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 01:26:14.344 05:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 01:26:14.344 05:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 01:26:14.344 05:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 01:26:14.344 05:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 01:26:14.344 05:21:05 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 01:26:14.344 05:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:26:14.344 05:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:26:14.344 05:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:26:14.344 05:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:26:14.344 05:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 01:26:14.344 05:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:26:14.344 05:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:26:14.344 05:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:26:14.344 05:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 01:26:14.344 05:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:26:14.344 05:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:26:14.344 05:21:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:14.344 05:21:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:26:14.344 05:21:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:14.344 05:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:26:14.344 "name": "Existed_Raid", 01:26:14.344 "uuid": "97ca5cdf-0181-4034-8cf2-0e2c578d27a7", 01:26:14.344 "strip_size_kb": 0, 01:26:14.344 
"state": "online", 01:26:14.344 "raid_level": "raid1", 01:26:14.344 "superblock": true, 01:26:14.344 "num_base_bdevs": 4, 01:26:14.344 "num_base_bdevs_discovered": 3, 01:26:14.344 "num_base_bdevs_operational": 3, 01:26:14.344 "base_bdevs_list": [ 01:26:14.344 { 01:26:14.344 "name": null, 01:26:14.344 "uuid": "00000000-0000-0000-0000-000000000000", 01:26:14.344 "is_configured": false, 01:26:14.344 "data_offset": 0, 01:26:14.344 "data_size": 63488 01:26:14.344 }, 01:26:14.344 { 01:26:14.344 "name": "BaseBdev2", 01:26:14.344 "uuid": "d9244faf-4671-442d-aa41-a8c48b030773", 01:26:14.344 "is_configured": true, 01:26:14.344 "data_offset": 2048, 01:26:14.344 "data_size": 63488 01:26:14.344 }, 01:26:14.344 { 01:26:14.344 "name": "BaseBdev3", 01:26:14.344 "uuid": "8e65a08e-ab7c-464f-b650-a7c4d5e9130b", 01:26:14.344 "is_configured": true, 01:26:14.344 "data_offset": 2048, 01:26:14.344 "data_size": 63488 01:26:14.344 }, 01:26:14.344 { 01:26:14.344 "name": "BaseBdev4", 01:26:14.344 "uuid": "ddf5e303-10a3-427d-94cf-75dabe067f98", 01:26:14.344 "is_configured": true, 01:26:14.344 "data_offset": 2048, 01:26:14.344 "data_size": 63488 01:26:14.344 } 01:26:14.344 ] 01:26:14.344 }' 01:26:14.344 05:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:26:14.345 05:21:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:26:14.603 05:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 01:26:14.603 05:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 01:26:14.603 05:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 01:26:14.603 05:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 01:26:14.603 05:21:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:14.603 05:21:06 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:26:14.861 05:21:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:14.861 05:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 01:26:14.861 05:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 01:26:14.861 05:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 01:26:14.861 05:21:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:14.861 05:21:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:26:14.861 [2024-12-09 05:21:06.290314] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 01:26:14.861 05:21:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:14.861 05:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 01:26:14.861 05:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 01:26:14.861 05:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 01:26:14.861 05:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 01:26:14.861 05:21:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:14.861 05:21:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:26:14.861 05:21:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:14.861 05:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 01:26:14.861 05:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 01:26:14.861 05:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 01:26:14.861 05:21:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:14.861 05:21:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:26:14.861 [2024-12-09 05:21:06.429199] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 01:26:15.119 05:21:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:15.119 05:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 01:26:15.119 05:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 01:26:15.119 05:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 01:26:15.119 05:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 01:26:15.119 05:21:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:15.119 05:21:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:26:15.119 05:21:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:15.119 05:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 01:26:15.119 05:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 01:26:15.119 05:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 01:26:15.119 05:21:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:15.119 05:21:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:26:15.119 [2024-12-09 05:21:06.578191] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 01:26:15.119 [2024-12-09 05:21:06.578742] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 01:26:15.119 [2024-12-09 05:21:06.692406] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 01:26:15.119 [2024-12-09 05:21:06.692503] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 01:26:15.119 [2024-12-09 05:21:06.692530] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 01:26:15.119 05:21:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:15.119 05:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 01:26:15.119 05:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 01:26:15.119 05:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 01:26:15.119 05:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 01:26:15.119 05:21:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:15.119 05:21:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:26:15.119 05:21:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:15.378 05:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 01:26:15.378 05:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 01:26:15.378 05:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 01:26:15.378 05:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 01:26:15.378 05:21:06 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 01:26:15.378 05:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 01:26:15.378 05:21:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:15.378 05:21:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:26:15.378 BaseBdev2 01:26:15.378 05:21:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:15.378 05:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 01:26:15.378 05:21:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 01:26:15.378 05:21:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 01:26:15.378 05:21:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 01:26:15.378 05:21:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 01:26:15.378 05:21:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 01:26:15.378 05:21:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 01:26:15.378 05:21:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:15.378 05:21:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:26:15.378 05:21:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:15.378 05:21:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 01:26:15.378 05:21:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:15.378 05:21:06 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 01:26:15.378 [ 01:26:15.378 { 01:26:15.378 "name": "BaseBdev2", 01:26:15.378 "aliases": [ 01:26:15.378 "ee6c48b1-fee1-447a-ae9e-5037d8bfd894" 01:26:15.378 ], 01:26:15.378 "product_name": "Malloc disk", 01:26:15.378 "block_size": 512, 01:26:15.378 "num_blocks": 65536, 01:26:15.378 "uuid": "ee6c48b1-fee1-447a-ae9e-5037d8bfd894", 01:26:15.378 "assigned_rate_limits": { 01:26:15.378 "rw_ios_per_sec": 0, 01:26:15.378 "rw_mbytes_per_sec": 0, 01:26:15.378 "r_mbytes_per_sec": 0, 01:26:15.378 "w_mbytes_per_sec": 0 01:26:15.378 }, 01:26:15.378 "claimed": false, 01:26:15.378 "zoned": false, 01:26:15.378 "supported_io_types": { 01:26:15.378 "read": true, 01:26:15.378 "write": true, 01:26:15.378 "unmap": true, 01:26:15.378 "flush": true, 01:26:15.378 "reset": true, 01:26:15.378 "nvme_admin": false, 01:26:15.378 "nvme_io": false, 01:26:15.378 "nvme_io_md": false, 01:26:15.378 "write_zeroes": true, 01:26:15.378 "zcopy": true, 01:26:15.378 "get_zone_info": false, 01:26:15.378 "zone_management": false, 01:26:15.378 "zone_append": false, 01:26:15.378 "compare": false, 01:26:15.378 "compare_and_write": false, 01:26:15.378 "abort": true, 01:26:15.378 "seek_hole": false, 01:26:15.378 "seek_data": false, 01:26:15.378 "copy": true, 01:26:15.378 "nvme_iov_md": false 01:26:15.378 }, 01:26:15.378 "memory_domains": [ 01:26:15.378 { 01:26:15.378 "dma_device_id": "system", 01:26:15.378 "dma_device_type": 1 01:26:15.378 }, 01:26:15.378 { 01:26:15.378 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:26:15.378 "dma_device_type": 2 01:26:15.378 } 01:26:15.378 ], 01:26:15.378 "driver_specific": {} 01:26:15.378 } 01:26:15.378 ] 01:26:15.378 05:21:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:15.378 05:21:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 01:26:15.378 05:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 01:26:15.378 05:21:06 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 01:26:15.378 05:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 01:26:15.378 05:21:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:15.378 05:21:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:26:15.378 BaseBdev3 01:26:15.378 05:21:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:15.378 05:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 01:26:15.378 05:21:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 01:26:15.378 05:21:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 01:26:15.378 05:21:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 01:26:15.378 05:21:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 01:26:15.378 05:21:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 01:26:15.378 05:21:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 01:26:15.378 05:21:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:15.378 05:21:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:26:15.378 05:21:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:15.378 05:21:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 01:26:15.378 05:21:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:15.378 05:21:06 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:26:15.378 [ 01:26:15.378 { 01:26:15.378 "name": "BaseBdev3", 01:26:15.378 "aliases": [ 01:26:15.378 "16e43b35-aaa6-44d7-8de2-a09479219310" 01:26:15.378 ], 01:26:15.378 "product_name": "Malloc disk", 01:26:15.378 "block_size": 512, 01:26:15.378 "num_blocks": 65536, 01:26:15.378 "uuid": "16e43b35-aaa6-44d7-8de2-a09479219310", 01:26:15.378 "assigned_rate_limits": { 01:26:15.378 "rw_ios_per_sec": 0, 01:26:15.378 "rw_mbytes_per_sec": 0, 01:26:15.378 "r_mbytes_per_sec": 0, 01:26:15.378 "w_mbytes_per_sec": 0 01:26:15.378 }, 01:26:15.378 "claimed": false, 01:26:15.378 "zoned": false, 01:26:15.378 "supported_io_types": { 01:26:15.378 "read": true, 01:26:15.378 "write": true, 01:26:15.378 "unmap": true, 01:26:15.378 "flush": true, 01:26:15.378 "reset": true, 01:26:15.378 "nvme_admin": false, 01:26:15.378 "nvme_io": false, 01:26:15.378 "nvme_io_md": false, 01:26:15.378 "write_zeroes": true, 01:26:15.378 "zcopy": true, 01:26:15.378 "get_zone_info": false, 01:26:15.378 "zone_management": false, 01:26:15.378 "zone_append": false, 01:26:15.378 "compare": false, 01:26:15.378 "compare_and_write": false, 01:26:15.379 "abort": true, 01:26:15.379 "seek_hole": false, 01:26:15.379 "seek_data": false, 01:26:15.379 "copy": true, 01:26:15.379 "nvme_iov_md": false 01:26:15.379 }, 01:26:15.379 "memory_domains": [ 01:26:15.379 { 01:26:15.379 "dma_device_id": "system", 01:26:15.379 "dma_device_type": 1 01:26:15.379 }, 01:26:15.379 { 01:26:15.379 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:26:15.379 "dma_device_type": 2 01:26:15.379 } 01:26:15.379 ], 01:26:15.379 "driver_specific": {} 01:26:15.379 } 01:26:15.379 ] 01:26:15.379 05:21:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:15.379 05:21:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 01:26:15.379 05:21:06 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 01:26:15.379 05:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 01:26:15.379 05:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 01:26:15.379 05:21:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:15.379 05:21:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:26:15.379 BaseBdev4 01:26:15.379 05:21:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:15.379 05:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 01:26:15.379 05:21:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 01:26:15.379 05:21:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 01:26:15.379 05:21:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 01:26:15.379 05:21:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 01:26:15.379 05:21:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 01:26:15.379 05:21:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 01:26:15.379 05:21:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:15.379 05:21:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:26:15.379 05:21:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:15.379 05:21:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 01:26:15.379 05:21:06 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 01:26:15.379 05:21:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:26:15.379 [ 01:26:15.379 { 01:26:15.379 "name": "BaseBdev4", 01:26:15.379 "aliases": [ 01:26:15.379 "49c8fbd6-35f7-423d-be9a-43244c66b748" 01:26:15.379 ], 01:26:15.379 "product_name": "Malloc disk", 01:26:15.379 "block_size": 512, 01:26:15.379 "num_blocks": 65536, 01:26:15.379 "uuid": "49c8fbd6-35f7-423d-be9a-43244c66b748", 01:26:15.379 "assigned_rate_limits": { 01:26:15.379 "rw_ios_per_sec": 0, 01:26:15.379 "rw_mbytes_per_sec": 0, 01:26:15.379 "r_mbytes_per_sec": 0, 01:26:15.379 "w_mbytes_per_sec": 0 01:26:15.379 }, 01:26:15.379 "claimed": false, 01:26:15.379 "zoned": false, 01:26:15.379 "supported_io_types": { 01:26:15.379 "read": true, 01:26:15.379 "write": true, 01:26:15.379 "unmap": true, 01:26:15.379 "flush": true, 01:26:15.379 "reset": true, 01:26:15.379 "nvme_admin": false, 01:26:15.379 "nvme_io": false, 01:26:15.379 "nvme_io_md": false, 01:26:15.379 "write_zeroes": true, 01:26:15.379 "zcopy": true, 01:26:15.379 "get_zone_info": false, 01:26:15.379 "zone_management": false, 01:26:15.379 "zone_append": false, 01:26:15.379 "compare": false, 01:26:15.379 "compare_and_write": false, 01:26:15.379 "abort": true, 01:26:15.379 "seek_hole": false, 01:26:15.379 "seek_data": false, 01:26:15.638 "copy": true, 01:26:15.638 "nvme_iov_md": false 01:26:15.638 }, 01:26:15.638 "memory_domains": [ 01:26:15.638 { 01:26:15.638 "dma_device_id": "system", 01:26:15.638 "dma_device_type": 1 01:26:15.638 }, 01:26:15.638 { 01:26:15.638 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:26:15.638 "dma_device_type": 2 01:26:15.638 } 01:26:15.638 ], 01:26:15.638 "driver_specific": {} 01:26:15.638 } 01:26:15.638 ] 01:26:15.638 05:21:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:15.638 05:21:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 
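The loop above recreates each base bdev with `bdev_malloc_create 32 512 -b BaseBdevN` and then blocks in `waitforbdev`, which polls `bdev_get_bdevs -b <name> -t 2000` until the bdev appears. A minimal Python sketch of that readiness check follows; the `SAMPLE` record is trimmed from the `bdev_get_bdevs` output captured in the log above, and `bdev_ready` is a hypothetical helper name, not part of SPDK.

```python
import json

# Sample record shaped like the `rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000`
# output captured in the log above (trimmed to the fields checked here).
SAMPLE = '''
[
  {
    "name": "BaseBdev4",
    "product_name": "Malloc disk",
    "block_size": 512,
    "num_blocks": 65536,
    "claimed": false,
    "supported_io_types": {"read": true, "write": true}
  }
]
'''

def bdev_ready(raw_json, name):
    """Return True if a bdev named `name` appears in bdev_get_bdevs output.

    This mirrors what the test's waitforbdev helper polls for; a real caller
    would invoke rpc.py in a retry loop until this returns True or times out.
    """
    try:
        bdevs = json.loads(raw_json)
    except json.JSONDecodeError:
        return False  # target not ready / partial output: treat as absent
    return any(b.get("name") == name for b in bdevs)

print(bdev_ready(SAMPLE, "BaseBdev4"))  # True once the malloc bdev exists
```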
01:26:15.638 05:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 01:26:15.638 05:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 01:26:15.638 05:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 01:26:15.638 05:21:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:15.638 05:21:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:26:15.638 [2024-12-09 05:21:07.002000] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 01:26:15.638 [2024-12-09 05:21:07.002319] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 01:26:15.638 [2024-12-09 05:21:07.002533] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 01:26:15.638 [2024-12-09 05:21:07.005256] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 01:26:15.638 [2024-12-09 05:21:07.005559] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 01:26:15.638 05:21:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:15.638 05:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 01:26:15.638 05:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:26:15.638 05:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:26:15.638 05:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:26:15.638 05:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
01:26:15.638 05:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 01:26:15.638 05:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:26:15.638 05:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:26:15.638 05:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:26:15.638 05:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 01:26:15.638 05:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:26:15.638 05:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:26:15.638 05:21:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:15.638 05:21:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:26:15.638 05:21:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:15.638 05:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:26:15.638 "name": "Existed_Raid", 01:26:15.638 "uuid": "729819dd-c55c-481b-ada6-b555b6420753", 01:26:15.638 "strip_size_kb": 0, 01:26:15.638 "state": "configuring", 01:26:15.638 "raid_level": "raid1", 01:26:15.638 "superblock": true, 01:26:15.638 "num_base_bdevs": 4, 01:26:15.638 "num_base_bdevs_discovered": 3, 01:26:15.638 "num_base_bdevs_operational": 4, 01:26:15.638 "base_bdevs_list": [ 01:26:15.638 { 01:26:15.638 "name": "BaseBdev1", 01:26:15.638 "uuid": "00000000-0000-0000-0000-000000000000", 01:26:15.638 "is_configured": false, 01:26:15.638 "data_offset": 0, 01:26:15.638 "data_size": 0 01:26:15.638 }, 01:26:15.638 { 01:26:15.638 "name": "BaseBdev2", 01:26:15.638 "uuid": "ee6c48b1-fee1-447a-ae9e-5037d8bfd894", 
01:26:15.638 "is_configured": true, 01:26:15.638 "data_offset": 2048, 01:26:15.638 "data_size": 63488 01:26:15.638 }, 01:26:15.638 { 01:26:15.638 "name": "BaseBdev3", 01:26:15.638 "uuid": "16e43b35-aaa6-44d7-8de2-a09479219310", 01:26:15.638 "is_configured": true, 01:26:15.638 "data_offset": 2048, 01:26:15.638 "data_size": 63488 01:26:15.638 }, 01:26:15.638 { 01:26:15.638 "name": "BaseBdev4", 01:26:15.638 "uuid": "49c8fbd6-35f7-423d-be9a-43244c66b748", 01:26:15.638 "is_configured": true, 01:26:15.638 "data_offset": 2048, 01:26:15.638 "data_size": 63488 01:26:15.638 } 01:26:15.638 ] 01:26:15.638 }' 01:26:15.638 05:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:26:15.638 05:21:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:26:16.205 05:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 01:26:16.205 05:21:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:16.205 05:21:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:26:16.205 [2024-12-09 05:21:07.546291] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 01:26:16.205 05:21:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:16.205 05:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 01:26:16.205 05:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:26:16.205 05:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:26:16.205 05:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:26:16.205 05:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
01:26:16.205 05:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 01:26:16.205 05:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:26:16.205 05:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:26:16.205 05:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:26:16.205 05:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 01:26:16.205 05:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:26:16.205 05:21:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:16.205 05:21:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:26:16.205 05:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:26:16.205 05:21:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:16.205 05:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:26:16.205 "name": "Existed_Raid", 01:26:16.205 "uuid": "729819dd-c55c-481b-ada6-b555b6420753", 01:26:16.205 "strip_size_kb": 0, 01:26:16.205 "state": "configuring", 01:26:16.205 "raid_level": "raid1", 01:26:16.205 "superblock": true, 01:26:16.205 "num_base_bdevs": 4, 01:26:16.205 "num_base_bdevs_discovered": 2, 01:26:16.205 "num_base_bdevs_operational": 4, 01:26:16.205 "base_bdevs_list": [ 01:26:16.205 { 01:26:16.205 "name": "BaseBdev1", 01:26:16.205 "uuid": "00000000-0000-0000-0000-000000000000", 01:26:16.205 "is_configured": false, 01:26:16.205 "data_offset": 0, 01:26:16.205 "data_size": 0 01:26:16.205 }, 01:26:16.205 { 01:26:16.205 "name": null, 01:26:16.205 "uuid": "ee6c48b1-fee1-447a-ae9e-5037d8bfd894", 01:26:16.205 
"is_configured": false, 01:26:16.205 "data_offset": 0, 01:26:16.205 "data_size": 63488 01:26:16.205 }, 01:26:16.205 { 01:26:16.205 "name": "BaseBdev3", 01:26:16.205 "uuid": "16e43b35-aaa6-44d7-8de2-a09479219310", 01:26:16.205 "is_configured": true, 01:26:16.205 "data_offset": 2048, 01:26:16.205 "data_size": 63488 01:26:16.205 }, 01:26:16.205 { 01:26:16.205 "name": "BaseBdev4", 01:26:16.205 "uuid": "49c8fbd6-35f7-423d-be9a-43244c66b748", 01:26:16.205 "is_configured": true, 01:26:16.205 "data_offset": 2048, 01:26:16.205 "data_size": 63488 01:26:16.205 } 01:26:16.205 ] 01:26:16.205 }' 01:26:16.205 05:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:26:16.205 05:21:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:26:16.463 05:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 01:26:16.463 05:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 01:26:16.463 05:21:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:16.463 05:21:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:26:16.463 05:21:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:16.722 05:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 01:26:16.722 05:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 01:26:16.722 05:21:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:16.722 05:21:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:26:16.722 [2024-12-09 05:21:08.149724] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 01:26:16.722 BaseBdev1 
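After each add/remove step the script calls `verify_raid_bdev_state`, which runs `bdev_raid_get_bdevs all`, selects the target array with `jq -r '.[] | select(.name == "Existed_Raid")'`, and compares the state, level, and operational count against expectations. A sketch of that comparison in Python, under the assumption of a record shaped like the `Existed_Raid` dump in the log (the `check_raid_state` helper name is illustrative, not SPDK's):

```python
import json

# A trimmed copy of the `bdev_raid_get_bdevs all` record for Existed_Raid
# shown in the log (only the fields verify_raid_bdev_state inspects).
RAID_INFO = '''
[
  {
    "name": "Existed_Raid",
    "strip_size_kb": 0,
    "state": "configuring",
    "raid_level": "raid1",
    "num_base_bdevs": 4,
    "num_base_bdevs_discovered": 3,
    "num_base_bdevs_operational": 4
  }
]
'''

def check_raid_state(raw_json, name, expected_state, raid_level, operational):
    """Re-implement the jq select-and-compare done by verify_raid_bdev_state:
    pick the named raid bdev, then compare state, level, and the number of
    operational base bdevs against the expected values."""
    info = next(b for b in json.loads(raw_json) if b["name"] == name)
    return (info["state"] == expected_state
            and info["raid_level"] == raid_level
            and info["num_base_bdevs_operational"] == operational)

# The array stays "configuring" with 4 operational slots while BaseBdev2's
# slot is unconfigured, matching the expectations passed at sh@294 above.
print(check_raid_state(RAID_INFO, "Existed_Raid", "configuring", "raid1", 4))
```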
01:26:16.722 05:21:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:16.722 05:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 01:26:16.722 05:21:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 01:26:16.722 05:21:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 01:26:16.722 05:21:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 01:26:16.722 05:21:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 01:26:16.722 05:21:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 01:26:16.722 05:21:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 01:26:16.722 05:21:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:16.722 05:21:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:26:16.722 05:21:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:16.722 05:21:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 01:26:16.722 05:21:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:16.722 05:21:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:26:16.722 [ 01:26:16.722 { 01:26:16.722 "name": "BaseBdev1", 01:26:16.722 "aliases": [ 01:26:16.722 "afc11387-3484-4a5b-8f6a-934089407d62" 01:26:16.722 ], 01:26:16.722 "product_name": "Malloc disk", 01:26:16.722 "block_size": 512, 01:26:16.722 "num_blocks": 65536, 01:26:16.722 "uuid": "afc11387-3484-4a5b-8f6a-934089407d62", 01:26:16.722 "assigned_rate_limits": { 01:26:16.722 
"rw_ios_per_sec": 0, 01:26:16.722 "rw_mbytes_per_sec": 0, 01:26:16.722 "r_mbytes_per_sec": 0, 01:26:16.722 "w_mbytes_per_sec": 0 01:26:16.722 }, 01:26:16.722 "claimed": true, 01:26:16.722 "claim_type": "exclusive_write", 01:26:16.722 "zoned": false, 01:26:16.722 "supported_io_types": { 01:26:16.722 "read": true, 01:26:16.722 "write": true, 01:26:16.722 "unmap": true, 01:26:16.722 "flush": true, 01:26:16.722 "reset": true, 01:26:16.722 "nvme_admin": false, 01:26:16.722 "nvme_io": false, 01:26:16.722 "nvme_io_md": false, 01:26:16.722 "write_zeroes": true, 01:26:16.722 "zcopy": true, 01:26:16.722 "get_zone_info": false, 01:26:16.722 "zone_management": false, 01:26:16.722 "zone_append": false, 01:26:16.722 "compare": false, 01:26:16.722 "compare_and_write": false, 01:26:16.722 "abort": true, 01:26:16.722 "seek_hole": false, 01:26:16.722 "seek_data": false, 01:26:16.722 "copy": true, 01:26:16.722 "nvme_iov_md": false 01:26:16.722 }, 01:26:16.722 "memory_domains": [ 01:26:16.722 { 01:26:16.722 "dma_device_id": "system", 01:26:16.722 "dma_device_type": 1 01:26:16.722 }, 01:26:16.722 { 01:26:16.722 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:26:16.722 "dma_device_type": 2 01:26:16.722 } 01:26:16.722 ], 01:26:16.722 "driver_specific": {} 01:26:16.722 } 01:26:16.722 ] 01:26:16.722 05:21:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:16.722 05:21:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 01:26:16.722 05:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 01:26:16.722 05:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:26:16.722 05:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:26:16.722 05:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 01:26:16.722 05:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:26:16.722 05:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 01:26:16.722 05:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:26:16.722 05:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:26:16.722 05:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:26:16.722 05:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 01:26:16.722 05:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:26:16.722 05:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:26:16.722 05:21:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:16.722 05:21:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:26:16.722 05:21:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:16.722 05:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:26:16.722 "name": "Existed_Raid", 01:26:16.722 "uuid": "729819dd-c55c-481b-ada6-b555b6420753", 01:26:16.722 "strip_size_kb": 0, 01:26:16.722 "state": "configuring", 01:26:16.722 "raid_level": "raid1", 01:26:16.722 "superblock": true, 01:26:16.722 "num_base_bdevs": 4, 01:26:16.722 "num_base_bdevs_discovered": 3, 01:26:16.722 "num_base_bdevs_operational": 4, 01:26:16.722 "base_bdevs_list": [ 01:26:16.722 { 01:26:16.722 "name": "BaseBdev1", 01:26:16.722 "uuid": "afc11387-3484-4a5b-8f6a-934089407d62", 01:26:16.722 "is_configured": true, 01:26:16.722 "data_offset": 2048, 01:26:16.722 "data_size": 63488 
01:26:16.722 }, 01:26:16.722 { 01:26:16.722 "name": null, 01:26:16.722 "uuid": "ee6c48b1-fee1-447a-ae9e-5037d8bfd894", 01:26:16.722 "is_configured": false, 01:26:16.722 "data_offset": 0, 01:26:16.722 "data_size": 63488 01:26:16.722 }, 01:26:16.722 { 01:26:16.722 "name": "BaseBdev3", 01:26:16.722 "uuid": "16e43b35-aaa6-44d7-8de2-a09479219310", 01:26:16.722 "is_configured": true, 01:26:16.722 "data_offset": 2048, 01:26:16.722 "data_size": 63488 01:26:16.722 }, 01:26:16.723 { 01:26:16.723 "name": "BaseBdev4", 01:26:16.723 "uuid": "49c8fbd6-35f7-423d-be9a-43244c66b748", 01:26:16.723 "is_configured": true, 01:26:16.723 "data_offset": 2048, 01:26:16.723 "data_size": 63488 01:26:16.723 } 01:26:16.723 ] 01:26:16.723 }' 01:26:16.723 05:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:26:16.723 05:21:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:26:17.290 05:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 01:26:17.290 05:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 01:26:17.290 05:21:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:17.290 05:21:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:26:17.290 05:21:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:17.290 05:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 01:26:17.290 05:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 01:26:17.290 05:21:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:17.290 05:21:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:26:17.290 
[2024-12-09 05:21:08.762096] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 01:26:17.290 05:21:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:17.290 05:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 01:26:17.290 05:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:26:17.290 05:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:26:17.290 05:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:26:17.290 05:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:26:17.291 05:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 01:26:17.291 05:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:26:17.291 05:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:26:17.291 05:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:26:17.291 05:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 01:26:17.291 05:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:26:17.291 05:21:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:17.291 05:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:26:17.291 05:21:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:26:17.291 05:21:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:17.291 05:21:08 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:26:17.291 "name": "Existed_Raid", 01:26:17.291 "uuid": "729819dd-c55c-481b-ada6-b555b6420753", 01:26:17.291 "strip_size_kb": 0, 01:26:17.291 "state": "configuring", 01:26:17.291 "raid_level": "raid1", 01:26:17.291 "superblock": true, 01:26:17.291 "num_base_bdevs": 4, 01:26:17.291 "num_base_bdevs_discovered": 2, 01:26:17.291 "num_base_bdevs_operational": 4, 01:26:17.291 "base_bdevs_list": [ 01:26:17.291 { 01:26:17.291 "name": "BaseBdev1", 01:26:17.291 "uuid": "afc11387-3484-4a5b-8f6a-934089407d62", 01:26:17.291 "is_configured": true, 01:26:17.291 "data_offset": 2048, 01:26:17.291 "data_size": 63488 01:26:17.291 }, 01:26:17.291 { 01:26:17.291 "name": null, 01:26:17.291 "uuid": "ee6c48b1-fee1-447a-ae9e-5037d8bfd894", 01:26:17.291 "is_configured": false, 01:26:17.291 "data_offset": 0, 01:26:17.291 "data_size": 63488 01:26:17.291 }, 01:26:17.291 { 01:26:17.291 "name": null, 01:26:17.291 "uuid": "16e43b35-aaa6-44d7-8de2-a09479219310", 01:26:17.291 "is_configured": false, 01:26:17.291 "data_offset": 0, 01:26:17.291 "data_size": 63488 01:26:17.291 }, 01:26:17.291 { 01:26:17.291 "name": "BaseBdev4", 01:26:17.291 "uuid": "49c8fbd6-35f7-423d-be9a-43244c66b748", 01:26:17.291 "is_configured": true, 01:26:17.291 "data_offset": 2048, 01:26:17.291 "data_size": 63488 01:26:17.291 } 01:26:17.291 ] 01:26:17.291 }' 01:26:17.291 05:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:26:17.291 05:21:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:26:17.858 05:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 01:26:17.858 05:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 01:26:17.858 05:21:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:17.858 
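The step above removes BaseBdev3 from a four-member raid1 and re-verifies state: `num_base_bdevs_discovered` drops from 3 to 2 while `num_base_bdevs_operational` stays 4, so the array remains `configuring`. A minimal sketch of that check, using the field values captured in the logged `bdev_raid_get_bdevs` output (the real `verify_raid_bdev_state` helper drives this through `rpc_cmd` and `jq`; the `field` parser here is purely illustrative):

```shell
#!/bin/sh
# Re-check of the state logged above after bdev_raid_remove_base_bdev BaseBdev3.
# JSON is a trimmed copy of the bdev_raid_get_bdevs output from the transcript.
raid_bdev_info='{
  "name": "Existed_Raid",
  "state": "configuring",
  "num_base_bdevs": 4,
  "num_base_bdevs_discovered": 2,
  "num_base_bdevs_operational": 4
}'

field() {  # extract a bare number or quoted string for a top-level key
    echo "$raid_bdev_info" | grep -o "\"$1\": [^,}]*" | head -n1 \
        | cut -d: -f2 | tr -d ' "'
}

state=$(field state)
discovered=$(field num_base_bdevs_discovered)
operational=$(field num_base_bdevs_operational)

# With one of four members missing the raid must not come online yet
[ "$state" = "configuring" ] && [ "$discovered" -lt "$operational" ] \
    && echo "degraded-config check passed"
```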
05:21:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:26:17.858 05:21:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:17.858 05:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 01:26:17.858 05:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 01:26:17.858 05:21:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:17.858 05:21:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:26:17.858 [2024-12-09 05:21:09.346158] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 01:26:17.858 05:21:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:17.858 05:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 01:26:17.858 05:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:26:17.858 05:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:26:17.858 05:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:26:17.858 05:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:26:17.858 05:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 01:26:17.858 05:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:26:17.858 05:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:26:17.858 05:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 01:26:17.858 05:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 01:26:17.858 05:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:26:17.858 05:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:26:17.858 05:21:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:17.858 05:21:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:26:17.858 05:21:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:17.858 05:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:26:17.858 "name": "Existed_Raid", 01:26:17.858 "uuid": "729819dd-c55c-481b-ada6-b555b6420753", 01:26:17.858 "strip_size_kb": 0, 01:26:17.858 "state": "configuring", 01:26:17.858 "raid_level": "raid1", 01:26:17.858 "superblock": true, 01:26:17.858 "num_base_bdevs": 4, 01:26:17.858 "num_base_bdevs_discovered": 3, 01:26:17.858 "num_base_bdevs_operational": 4, 01:26:17.858 "base_bdevs_list": [ 01:26:17.858 { 01:26:17.858 "name": "BaseBdev1", 01:26:17.858 "uuid": "afc11387-3484-4a5b-8f6a-934089407d62", 01:26:17.858 "is_configured": true, 01:26:17.858 "data_offset": 2048, 01:26:17.858 "data_size": 63488 01:26:17.859 }, 01:26:17.859 { 01:26:17.859 "name": null, 01:26:17.859 "uuid": "ee6c48b1-fee1-447a-ae9e-5037d8bfd894", 01:26:17.859 "is_configured": false, 01:26:17.859 "data_offset": 0, 01:26:17.859 "data_size": 63488 01:26:17.859 }, 01:26:17.859 { 01:26:17.859 "name": "BaseBdev3", 01:26:17.859 "uuid": "16e43b35-aaa6-44d7-8de2-a09479219310", 01:26:17.859 "is_configured": true, 01:26:17.859 "data_offset": 2048, 01:26:17.859 "data_size": 63488 01:26:17.859 }, 01:26:17.859 { 01:26:17.859 "name": "BaseBdev4", 01:26:17.859 "uuid": 
"49c8fbd6-35f7-423d-be9a-43244c66b748", 01:26:17.859 "is_configured": true, 01:26:17.859 "data_offset": 2048, 01:26:17.859 "data_size": 63488 01:26:17.859 } 01:26:17.859 ] 01:26:17.859 }' 01:26:17.859 05:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:26:17.859 05:21:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:26:18.425 05:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 01:26:18.425 05:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 01:26:18.425 05:21:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:18.425 05:21:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:26:18.425 05:21:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:18.425 05:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 01:26:18.425 05:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 01:26:18.425 05:21:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:18.425 05:21:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:26:18.425 [2024-12-09 05:21:09.914405] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 01:26:18.425 05:21:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:18.425 05:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 01:26:18.425 05:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:26:18.425 05:21:10 bdev_raid.raid_state_function_test_sb -- 
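After `bdev_raid_add_base_bdev Existed_Raid BaseBdev3`, the log shows the bdev claimed back into its original slot (index 2) with `is_configured: true` and `data_offset: 2048` — with superblock enabled, user data sits after the on-disk raid superblock. A sketch of that slot check, with values copied from the logged JSON (the test itself uses `jq '.[0].base_bdevs_list[2].is_configured'`):

```shell
#!/bin/sh
# Slot check after re-adding BaseBdev3; values copied from the log above.
slot2='{"name": "BaseBdev3", "is_configured": true, "data_offset": 2048}'

case $slot2 in
    *'"is_configured": true'*) configured=true ;;
    *) configured=false ;;
esac
offset=$(echo "$slot2" | grep -o '"data_offset": [0-9]*' | grep -o '[0-9]*$')

# data_offset 2048 blocks = space reserved for the on-disk superblock
[ "$configured" = true ] && [ "$offset" -eq 2048 ] && echo "slot 2 reconfigured"
```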
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:26:18.425 05:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:26:18.425 05:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:26:18.425 05:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 01:26:18.425 05:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:26:18.425 05:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:26:18.425 05:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:26:18.425 05:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 01:26:18.425 05:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:26:18.425 05:21:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:18.425 05:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:26:18.425 05:21:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:26:18.425 05:21:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:18.683 05:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:26:18.683 "name": "Existed_Raid", 01:26:18.683 "uuid": "729819dd-c55c-481b-ada6-b555b6420753", 01:26:18.683 "strip_size_kb": 0, 01:26:18.683 "state": "configuring", 01:26:18.683 "raid_level": "raid1", 01:26:18.683 "superblock": true, 01:26:18.683 "num_base_bdevs": 4, 01:26:18.683 "num_base_bdevs_discovered": 2, 01:26:18.683 "num_base_bdevs_operational": 4, 01:26:18.683 "base_bdevs_list": [ 01:26:18.683 { 01:26:18.683 "name": null, 01:26:18.683 
"uuid": "afc11387-3484-4a5b-8f6a-934089407d62", 01:26:18.683 "is_configured": false, 01:26:18.683 "data_offset": 0, 01:26:18.683 "data_size": 63488 01:26:18.683 }, 01:26:18.683 { 01:26:18.683 "name": null, 01:26:18.683 "uuid": "ee6c48b1-fee1-447a-ae9e-5037d8bfd894", 01:26:18.683 "is_configured": false, 01:26:18.683 "data_offset": 0, 01:26:18.683 "data_size": 63488 01:26:18.683 }, 01:26:18.683 { 01:26:18.683 "name": "BaseBdev3", 01:26:18.683 "uuid": "16e43b35-aaa6-44d7-8de2-a09479219310", 01:26:18.683 "is_configured": true, 01:26:18.683 "data_offset": 2048, 01:26:18.683 "data_size": 63488 01:26:18.683 }, 01:26:18.683 { 01:26:18.683 "name": "BaseBdev4", 01:26:18.683 "uuid": "49c8fbd6-35f7-423d-be9a-43244c66b748", 01:26:18.683 "is_configured": true, 01:26:18.683 "data_offset": 2048, 01:26:18.683 "data_size": 63488 01:26:18.683 } 01:26:18.683 ] 01:26:18.683 }' 01:26:18.683 05:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:26:18.683 05:21:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:26:19.248 05:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 01:26:19.248 05:21:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:19.248 05:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 01:26:19.248 05:21:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:26:19.248 05:21:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:19.248 05:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 01:26:19.248 05:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 01:26:19.248 05:21:10 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 01:26:19.248 05:21:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:26:19.248 [2024-12-09 05:21:10.615585] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 01:26:19.248 05:21:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:19.248 05:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 01:26:19.248 05:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:26:19.248 05:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:26:19.248 05:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:26:19.248 05:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:26:19.248 05:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 01:26:19.248 05:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:26:19.248 05:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:26:19.248 05:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:26:19.248 05:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 01:26:19.248 05:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:26:19.248 05:21:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:19.248 05:21:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:26:19.248 05:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 
-- # jq -r '.[] | select(.name == "Existed_Raid")' 01:26:19.248 05:21:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:19.248 05:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:26:19.248 "name": "Existed_Raid", 01:26:19.248 "uuid": "729819dd-c55c-481b-ada6-b555b6420753", 01:26:19.248 "strip_size_kb": 0, 01:26:19.248 "state": "configuring", 01:26:19.248 "raid_level": "raid1", 01:26:19.248 "superblock": true, 01:26:19.248 "num_base_bdevs": 4, 01:26:19.248 "num_base_bdevs_discovered": 3, 01:26:19.248 "num_base_bdevs_operational": 4, 01:26:19.248 "base_bdevs_list": [ 01:26:19.248 { 01:26:19.248 "name": null, 01:26:19.248 "uuid": "afc11387-3484-4a5b-8f6a-934089407d62", 01:26:19.248 "is_configured": false, 01:26:19.248 "data_offset": 0, 01:26:19.248 "data_size": 63488 01:26:19.248 }, 01:26:19.248 { 01:26:19.248 "name": "BaseBdev2", 01:26:19.248 "uuid": "ee6c48b1-fee1-447a-ae9e-5037d8bfd894", 01:26:19.248 "is_configured": true, 01:26:19.248 "data_offset": 2048, 01:26:19.248 "data_size": 63488 01:26:19.248 }, 01:26:19.248 { 01:26:19.248 "name": "BaseBdev3", 01:26:19.248 "uuid": "16e43b35-aaa6-44d7-8de2-a09479219310", 01:26:19.248 "is_configured": true, 01:26:19.248 "data_offset": 2048, 01:26:19.248 "data_size": 63488 01:26:19.248 }, 01:26:19.248 { 01:26:19.248 "name": "BaseBdev4", 01:26:19.248 "uuid": "49c8fbd6-35f7-423d-be9a-43244c66b748", 01:26:19.248 "is_configured": true, 01:26:19.248 "data_offset": 2048, 01:26:19.248 "data_size": 63488 01:26:19.248 } 01:26:19.248 ] 01:26:19.248 }' 01:26:19.248 05:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:26:19.248 05:21:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:26:19.815 05:21:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 01:26:19.815 05:21:11 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 01:26:19.815 05:21:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:19.815 05:21:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:26:19.815 05:21:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:19.815 05:21:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 01:26:19.815 05:21:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 01:26:19.815 05:21:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 01:26:19.815 05:21:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:19.815 05:21:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:26:19.815 05:21:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:19.815 05:21:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u afc11387-3484-4a5b-8f6a-934089407d62 01:26:19.815 05:21:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:19.815 05:21:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:26:19.815 [2024-12-09 05:21:11.295028] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 01:26:19.815 [2024-12-09 05:21:11.295354] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 01:26:19.815 [2024-12-09 05:21:11.295440] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 01:26:19.815 NewBaseBdev 01:26:19.815 [2024-12-09 05:21:11.295767] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
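This is the interesting step: `bdev_malloc_delete BaseBdev1` removed a member entirely, and the test then reads the vacated slot's `uuid` back out of `base_bdevs_list[0]` and passes it to `bdev_malloc_create ... -u`. As exercised here, the raid claims a newly examined bdev into a missing slot when its UUID matches the one recorded for that slot, which is why NewBaseBdev is claimed immediately after creation. An illustrative sketch of that matching condition (UUIDs are the ones from the log; the comparison logic is a simplification of what the raid module does internally):

```shell
#!/bin/sh
# Why bdev_malloc_create is passed -u above: the raid re-claims a new bdev
# into the vacated slot only if the UUIDs match. Values are from the log.
missing_slot_uuid="afc11387-3484-4a5b-8f6a-934089407d62"
new_bdev_uuid="afc11387-3484-4a5b-8f6a-934089407d62"

if [ "$new_bdev_uuid" = "$missing_slot_uuid" ]; then
    msg="NewBaseBdev claimed into slot 0"
else
    msg="UUID mismatch: bdev stays unclaimed"
fi
echo "$msg"
```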
0x60d0000063c0 01:26:19.815 [2024-12-09 05:21:11.295970] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 01:26:19.815 [2024-12-09 05:21:11.295988] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 01:26:19.815 [2024-12-09 05:21:11.296162] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:26:19.815 05:21:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:19.815 05:21:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 01:26:19.815 05:21:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 01:26:19.815 05:21:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 01:26:19.815 05:21:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 01:26:19.815 05:21:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 01:26:19.815 05:21:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 01:26:19.815 05:21:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 01:26:19.815 05:21:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:19.815 05:21:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:26:19.815 05:21:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:19.815 05:21:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 01:26:19.815 05:21:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:19.815 05:21:11 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 01:26:19.815 [ 01:26:19.815 { 01:26:19.815 "name": "NewBaseBdev", 01:26:19.815 "aliases": [ 01:26:19.815 "afc11387-3484-4a5b-8f6a-934089407d62" 01:26:19.815 ], 01:26:19.815 "product_name": "Malloc disk", 01:26:19.815 "block_size": 512, 01:26:19.815 "num_blocks": 65536, 01:26:19.815 "uuid": "afc11387-3484-4a5b-8f6a-934089407d62", 01:26:19.815 "assigned_rate_limits": { 01:26:19.815 "rw_ios_per_sec": 0, 01:26:19.815 "rw_mbytes_per_sec": 0, 01:26:19.815 "r_mbytes_per_sec": 0, 01:26:19.815 "w_mbytes_per_sec": 0 01:26:19.815 }, 01:26:19.815 "claimed": true, 01:26:19.815 "claim_type": "exclusive_write", 01:26:19.815 "zoned": false, 01:26:19.815 "supported_io_types": { 01:26:19.815 "read": true, 01:26:19.815 "write": true, 01:26:19.815 "unmap": true, 01:26:19.815 "flush": true, 01:26:19.815 "reset": true, 01:26:19.815 "nvme_admin": false, 01:26:19.815 "nvme_io": false, 01:26:19.815 "nvme_io_md": false, 01:26:19.815 "write_zeroes": true, 01:26:19.815 "zcopy": true, 01:26:19.815 "get_zone_info": false, 01:26:19.815 "zone_management": false, 01:26:19.815 "zone_append": false, 01:26:19.815 "compare": false, 01:26:19.815 "compare_and_write": false, 01:26:19.815 "abort": true, 01:26:19.815 "seek_hole": false, 01:26:19.815 "seek_data": false, 01:26:19.815 "copy": true, 01:26:19.815 "nvme_iov_md": false 01:26:19.815 }, 01:26:19.815 "memory_domains": [ 01:26:19.815 { 01:26:19.815 "dma_device_id": "system", 01:26:19.815 "dma_device_type": 1 01:26:19.815 }, 01:26:19.815 { 01:26:19.815 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:26:19.815 "dma_device_type": 2 01:26:19.815 } 01:26:19.815 ], 01:26:19.815 "driver_specific": {} 01:26:19.815 } 01:26:19.815 ] 01:26:19.815 05:21:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:19.815 05:21:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 01:26:19.815 05:21:11 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 01:26:19.815 05:21:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:26:19.815 05:21:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:26:19.815 05:21:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:26:19.815 05:21:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:26:19.815 05:21:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 01:26:19.815 05:21:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:26:19.815 05:21:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:26:19.815 05:21:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:26:19.815 05:21:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 01:26:19.815 05:21:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:26:19.815 05:21:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:19.815 05:21:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:26:19.816 05:21:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:26:19.816 05:21:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:19.816 05:21:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:26:19.816 "name": "Existed_Raid", 01:26:19.816 "uuid": "729819dd-c55c-481b-ada6-b555b6420753", 01:26:19.816 "strip_size_kb": 0, 01:26:19.816 "state": "online", 01:26:19.816 "raid_level": 
"raid1", 01:26:19.816 "superblock": true, 01:26:19.816 "num_base_bdevs": 4, 01:26:19.816 "num_base_bdevs_discovered": 4, 01:26:19.816 "num_base_bdevs_operational": 4, 01:26:19.816 "base_bdevs_list": [ 01:26:19.816 { 01:26:19.816 "name": "NewBaseBdev", 01:26:19.816 "uuid": "afc11387-3484-4a5b-8f6a-934089407d62", 01:26:19.816 "is_configured": true, 01:26:19.816 "data_offset": 2048, 01:26:19.816 "data_size": 63488 01:26:19.816 }, 01:26:19.816 { 01:26:19.816 "name": "BaseBdev2", 01:26:19.816 "uuid": "ee6c48b1-fee1-447a-ae9e-5037d8bfd894", 01:26:19.816 "is_configured": true, 01:26:19.816 "data_offset": 2048, 01:26:19.816 "data_size": 63488 01:26:19.816 }, 01:26:19.816 { 01:26:19.816 "name": "BaseBdev3", 01:26:19.816 "uuid": "16e43b35-aaa6-44d7-8de2-a09479219310", 01:26:19.816 "is_configured": true, 01:26:19.816 "data_offset": 2048, 01:26:19.816 "data_size": 63488 01:26:19.816 }, 01:26:19.816 { 01:26:19.816 "name": "BaseBdev4", 01:26:19.816 "uuid": "49c8fbd6-35f7-423d-be9a-43244c66b748", 01:26:19.816 "is_configured": true, 01:26:19.816 "data_offset": 2048, 01:26:19.816 "data_size": 63488 01:26:19.816 } 01:26:19.816 ] 01:26:19.816 }' 01:26:19.816 05:21:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:26:19.816 05:21:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:26:20.382 05:21:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 01:26:20.382 05:21:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 01:26:20.382 05:21:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 01:26:20.382 05:21:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 01:26:20.382 05:21:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 01:26:20.382 05:21:11 bdev_raid.raid_state_function_test_sb -- 
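The dump above is the first one where `state` reads `online`: with NewBaseBdev claimed, `num_base_bdevs_discovered` reaches 4. As exercised in this test, the `configuring` → `online` flip happens exactly when every slot is discovered. A sketch of that condition, with counts taken from the logged JSON:

```shell
#!/bin/sh
# Condition behind the configuring -> online transition seen above:
# the raid starts once all of its slots are populated.
num_base_bdevs=4
num_base_bdevs_discovered=4

if [ "$num_base_bdevs_discovered" -eq "$num_base_bdevs" ]; then
    state=online
else
    state=configuring
fi
echo "expected state: $state"
```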
bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 01:26:20.382 05:21:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 01:26:20.382 05:21:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 01:26:20.382 05:21:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:20.382 05:21:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:26:20.383 [2024-12-09 05:21:11.871760] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 01:26:20.383 05:21:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:20.383 05:21:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 01:26:20.383 "name": "Existed_Raid", 01:26:20.383 "aliases": [ 01:26:20.383 "729819dd-c55c-481b-ada6-b555b6420753" 01:26:20.383 ], 01:26:20.383 "product_name": "Raid Volume", 01:26:20.383 "block_size": 512, 01:26:20.383 "num_blocks": 63488, 01:26:20.383 "uuid": "729819dd-c55c-481b-ada6-b555b6420753", 01:26:20.383 "assigned_rate_limits": { 01:26:20.383 "rw_ios_per_sec": 0, 01:26:20.383 "rw_mbytes_per_sec": 0, 01:26:20.383 "r_mbytes_per_sec": 0, 01:26:20.383 "w_mbytes_per_sec": 0 01:26:20.383 }, 01:26:20.383 "claimed": false, 01:26:20.383 "zoned": false, 01:26:20.383 "supported_io_types": { 01:26:20.383 "read": true, 01:26:20.383 "write": true, 01:26:20.383 "unmap": false, 01:26:20.383 "flush": false, 01:26:20.383 "reset": true, 01:26:20.383 "nvme_admin": false, 01:26:20.383 "nvme_io": false, 01:26:20.383 "nvme_io_md": false, 01:26:20.383 "write_zeroes": true, 01:26:20.383 "zcopy": false, 01:26:20.383 "get_zone_info": false, 01:26:20.383 "zone_management": false, 01:26:20.383 "zone_append": false, 01:26:20.383 "compare": false, 01:26:20.383 "compare_and_write": false, 01:26:20.383 "abort": false, 01:26:20.383 "seek_hole": false, 
01:26:20.383 "seek_data": false, 01:26:20.383 "copy": false, 01:26:20.383 "nvme_iov_md": false 01:26:20.383 }, 01:26:20.383 "memory_domains": [ 01:26:20.383 { 01:26:20.383 "dma_device_id": "system", 01:26:20.383 "dma_device_type": 1 01:26:20.383 }, 01:26:20.383 { 01:26:20.383 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:26:20.383 "dma_device_type": 2 01:26:20.383 }, 01:26:20.383 { 01:26:20.383 "dma_device_id": "system", 01:26:20.383 "dma_device_type": 1 01:26:20.383 }, 01:26:20.383 { 01:26:20.383 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:26:20.383 "dma_device_type": 2 01:26:20.383 }, 01:26:20.383 { 01:26:20.383 "dma_device_id": "system", 01:26:20.383 "dma_device_type": 1 01:26:20.383 }, 01:26:20.383 { 01:26:20.383 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:26:20.383 "dma_device_type": 2 01:26:20.383 }, 01:26:20.383 { 01:26:20.383 "dma_device_id": "system", 01:26:20.383 "dma_device_type": 1 01:26:20.383 }, 01:26:20.383 { 01:26:20.383 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:26:20.383 "dma_device_type": 2 01:26:20.383 } 01:26:20.383 ], 01:26:20.383 "driver_specific": { 01:26:20.383 "raid": { 01:26:20.383 "uuid": "729819dd-c55c-481b-ada6-b555b6420753", 01:26:20.383 "strip_size_kb": 0, 01:26:20.383 "state": "online", 01:26:20.383 "raid_level": "raid1", 01:26:20.383 "superblock": true, 01:26:20.383 "num_base_bdevs": 4, 01:26:20.383 "num_base_bdevs_discovered": 4, 01:26:20.383 "num_base_bdevs_operational": 4, 01:26:20.383 "base_bdevs_list": [ 01:26:20.383 { 01:26:20.383 "name": "NewBaseBdev", 01:26:20.383 "uuid": "afc11387-3484-4a5b-8f6a-934089407d62", 01:26:20.383 "is_configured": true, 01:26:20.383 "data_offset": 2048, 01:26:20.383 "data_size": 63488 01:26:20.383 }, 01:26:20.383 { 01:26:20.383 "name": "BaseBdev2", 01:26:20.383 "uuid": "ee6c48b1-fee1-447a-ae9e-5037d8bfd894", 01:26:20.383 "is_configured": true, 01:26:20.383 "data_offset": 2048, 01:26:20.383 "data_size": 63488 01:26:20.383 }, 01:26:20.383 { 01:26:20.383 "name": "BaseBdev3", 01:26:20.383 "uuid": 
"16e43b35-aaa6-44d7-8de2-a09479219310", 01:26:20.383 "is_configured": true, 01:26:20.383 "data_offset": 2048, 01:26:20.383 "data_size": 63488 01:26:20.383 }, 01:26:20.383 { 01:26:20.383 "name": "BaseBdev4", 01:26:20.383 "uuid": "49c8fbd6-35f7-423d-be9a-43244c66b748", 01:26:20.383 "is_configured": true, 01:26:20.383 "data_offset": 2048, 01:26:20.383 "data_size": 63488 01:26:20.383 } 01:26:20.383 ] 01:26:20.383 } 01:26:20.383 } 01:26:20.383 }' 01:26:20.383 05:21:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 01:26:20.383 05:21:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 01:26:20.383 BaseBdev2 01:26:20.383 BaseBdev3 01:26:20.383 BaseBdev4' 01:26:20.383 05:21:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:26:20.642 05:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 01:26:20.642 05:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:26:20.642 05:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 01:26:20.642 05:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:26:20.642 05:21:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:20.642 05:21:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:26:20.642 05:21:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:20.642 05:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:26:20.642 05:21:12 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:26:20.642 05:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:26:20.642 05:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 01:26:20.642 05:21:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:20.642 05:21:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:26:20.642 05:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:26:20.642 05:21:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:20.642 05:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:26:20.642 05:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:26:20.642 05:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:26:20.642 05:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 01:26:20.642 05:21:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:20.642 05:21:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:26:20.642 05:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:26:20.642 05:21:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:20.642 05:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:26:20.642 05:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:26:20.642 
05:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:26:20.642 05:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 01:26:20.642 05:21:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:20.642 05:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:26:20.642 05:21:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:26:20.642 05:21:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:20.642 05:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:26:20.642 05:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:26:20.642 05:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 01:26:20.642 05:21:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:20.642 05:21:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:26:20.642 [2024-12-09 05:21:12.247406] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 01:26:20.642 [2024-12-09 05:21:12.247443] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 01:26:20.642 [2024-12-09 05:21:12.247556] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 01:26:20.642 [2024-12-09 05:21:12.247975] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 01:26:20.642 [2024-12-09 05:21:12.247997] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 01:26:20.642 05:21:12 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:20.642 05:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 73937 01:26:20.642 05:21:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 73937 ']' 01:26:20.642 05:21:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 73937 01:26:20.642 05:21:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 01:26:20.900 05:21:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:26:20.900 05:21:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73937 01:26:20.900 killing process with pid 73937 01:26:20.900 05:21:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:26:20.900 05:21:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:26:20.900 05:21:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73937' 01:26:20.900 05:21:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 73937 01:26:20.900 [2024-12-09 05:21:12.283947] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 01:26:20.900 05:21:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 73937 01:26:21.158 [2024-12-09 05:21:12.618430] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 01:26:22.530 ************************************ 01:26:22.530 END TEST raid_state_function_test_sb 01:26:22.530 ************************************ 01:26:22.530 05:21:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 01:26:22.530 01:26:22.530 real 0m13.141s 01:26:22.530 user 0m21.713s 01:26:22.530 sys 0m1.807s 01:26:22.530 05:21:13 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 01:26:22.530 05:21:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:26:22.530 05:21:13 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 4 01:26:22.530 05:21:13 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 01:26:22.530 05:21:13 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 01:26:22.530 05:21:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 01:26:22.530 ************************************ 01:26:22.530 START TEST raid_superblock_test 01:26:22.530 ************************************ 01:26:22.530 05:21:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 4 01:26:22.530 05:21:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 01:26:22.530 05:21:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 01:26:22.530 05:21:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 01:26:22.530 05:21:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 01:26:22.530 05:21:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 01:26:22.530 05:21:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 01:26:22.530 05:21:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 01:26:22.531 05:21:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 01:26:22.531 05:21:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 01:26:22.531 05:21:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 01:26:22.531 05:21:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 
01:26:22.531 05:21:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 01:26:22.531 05:21:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 01:26:22.531 05:21:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 01:26:22.531 05:21:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 01:26:22.531 05:21:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=74624 01:26:22.531 05:21:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 01:26:22.531 05:21:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 74624 01:26:22.531 05:21:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 74624 ']' 01:26:22.531 05:21:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:26:22.531 05:21:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 01:26:22.531 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:26:22.531 05:21:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:26:22.531 05:21:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 01:26:22.531 05:21:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:26:22.531 [2024-12-09 05:21:13.945719] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
01:26:22.531 [2024-12-09 05:21:13.945982] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74624 ] 01:26:22.531 [2024-12-09 05:21:14.136496] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:26:22.789 [2024-12-09 05:21:14.268030] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:26:23.046 [2024-12-09 05:21:14.469549] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 01:26:23.046 [2024-12-09 05:21:14.469785] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 01:26:23.620 05:21:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:26:23.620 05:21:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 01:26:23.620 05:21:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 01:26:23.620 05:21:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 01:26:23.620 05:21:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 01:26:23.620 05:21:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 01:26:23.620 05:21:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 01:26:23.620 05:21:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 01:26:23.620 05:21:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 01:26:23.620 05:21:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 01:26:23.620 05:21:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 01:26:23.620 
05:21:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:23.620 05:21:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:26:23.620 malloc1 01:26:23.620 05:21:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:23.620 05:21:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 01:26:23.620 05:21:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:23.620 05:21:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:26:23.620 [2024-12-09 05:21:15.050925] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 01:26:23.620 [2024-12-09 05:21:15.051189] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:26:23.620 [2024-12-09 05:21:15.051375] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 01:26:23.620 [2024-12-09 05:21:15.051508] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:26:23.620 [2024-12-09 05:21:15.054469] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:26:23.620 [2024-12-09 05:21:15.054648] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 01:26:23.620 pt1 01:26:23.620 05:21:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:23.620 05:21:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 01:26:23.620 05:21:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 01:26:23.620 05:21:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 01:26:23.620 05:21:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 01:26:23.620 05:21:15 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 01:26:23.620 05:21:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 01:26:23.620 05:21:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 01:26:23.620 05:21:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 01:26:23.620 05:21:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 01:26:23.620 05:21:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:23.620 05:21:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:26:23.620 malloc2 01:26:23.620 05:21:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:23.620 05:21:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 01:26:23.620 05:21:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:23.620 05:21:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:26:23.620 [2024-12-09 05:21:15.107693] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 01:26:23.620 [2024-12-09 05:21:15.107959] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:26:23.620 [2024-12-09 05:21:15.108056] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 01:26:23.620 [2024-12-09 05:21:15.108265] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:26:23.620 [2024-12-09 05:21:15.111135] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:26:23.620 [2024-12-09 05:21:15.111333] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 01:26:23.620 
pt2 01:26:23.620 05:21:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:23.620 05:21:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 01:26:23.620 05:21:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 01:26:23.620 05:21:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 01:26:23.620 05:21:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 01:26:23.620 05:21:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 01:26:23.620 05:21:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 01:26:23.620 05:21:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 01:26:23.620 05:21:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 01:26:23.620 05:21:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 01:26:23.620 05:21:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:23.620 05:21:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:26:23.620 malloc3 01:26:23.620 05:21:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:23.620 05:21:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 01:26:23.620 05:21:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:23.620 05:21:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:26:23.620 [2024-12-09 05:21:15.170858] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 01:26:23.620 [2024-12-09 05:21:15.170930] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:26:23.620 [2024-12-09 05:21:15.170965] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 01:26:23.620 [2024-12-09 05:21:15.170980] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:26:23.620 [2024-12-09 05:21:15.173868] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:26:23.620 [2024-12-09 05:21:15.174090] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 01:26:23.620 pt3 01:26:23.620 05:21:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:23.620 05:21:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 01:26:23.620 05:21:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 01:26:23.620 05:21:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 01:26:23.620 05:21:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 01:26:23.620 05:21:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 01:26:23.620 05:21:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 01:26:23.620 05:21:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 01:26:23.620 05:21:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 01:26:23.620 05:21:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 01:26:23.620 05:21:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:23.620 05:21:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:26:23.620 malloc4 01:26:23.620 05:21:15 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:23.620 05:21:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 01:26:23.620 05:21:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:23.620 05:21:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:26:23.620 [2024-12-09 05:21:15.226255] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 01:26:23.620 [2024-12-09 05:21:15.226535] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:26:23.620 [2024-12-09 05:21:15.226616] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 01:26:23.621 [2024-12-09 05:21:15.226802] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:26:23.621 [2024-12-09 05:21:15.229764] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:26:23.621 [2024-12-09 05:21:15.230019] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 01:26:23.621 pt4 01:26:23.621 05:21:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:23.621 05:21:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 01:26:23.621 05:21:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 01:26:23.621 05:21:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 01:26:23.621 05:21:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:23.621 05:21:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:26:23.877 [2024-12-09 05:21:15.238319] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 01:26:23.877 [2024-12-09 05:21:15.241112] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 01:26:23.877 [2024-12-09 05:21:15.241355] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 01:26:23.877 [2024-12-09 05:21:15.241642] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 01:26:23.877 [2024-12-09 05:21:15.242023] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 01:26:23.877 [2024-12-09 05:21:15.242188] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 01:26:23.877 [2024-12-09 05:21:15.242587] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 01:26:23.877 [2024-12-09 05:21:15.242953] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 01:26:23.877 [2024-12-09 05:21:15.243091] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 01:26:23.877 [2024-12-09 05:21:15.243542] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:26:23.878 05:21:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:23.878 05:21:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 01:26:23.878 05:21:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:26:23.878 05:21:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:26:23.878 05:21:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:26:23.878 05:21:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:26:23.878 05:21:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 01:26:23.878 05:21:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:26:23.878 
05:21:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:26:23.878 05:21:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:26:23.878 05:21:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:26:23.878 05:21:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:26:23.878 05:21:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:26:23.878 05:21:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:23.878 05:21:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:26:23.878 05:21:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:23.878 05:21:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:26:23.878 "name": "raid_bdev1", 01:26:23.878 "uuid": "d6b7ca45-8a5d-4e02-a931-1d2971f0251c", 01:26:23.878 "strip_size_kb": 0, 01:26:23.878 "state": "online", 01:26:23.878 "raid_level": "raid1", 01:26:23.878 "superblock": true, 01:26:23.878 "num_base_bdevs": 4, 01:26:23.878 "num_base_bdevs_discovered": 4, 01:26:23.878 "num_base_bdevs_operational": 4, 01:26:23.878 "base_bdevs_list": [ 01:26:23.878 { 01:26:23.878 "name": "pt1", 01:26:23.878 "uuid": "00000000-0000-0000-0000-000000000001", 01:26:23.878 "is_configured": true, 01:26:23.878 "data_offset": 2048, 01:26:23.878 "data_size": 63488 01:26:23.878 }, 01:26:23.878 { 01:26:23.878 "name": "pt2", 01:26:23.878 "uuid": "00000000-0000-0000-0000-000000000002", 01:26:23.878 "is_configured": true, 01:26:23.878 "data_offset": 2048, 01:26:23.878 "data_size": 63488 01:26:23.878 }, 01:26:23.878 { 01:26:23.878 "name": "pt3", 01:26:23.878 "uuid": "00000000-0000-0000-0000-000000000003", 01:26:23.878 "is_configured": true, 01:26:23.878 "data_offset": 2048, 01:26:23.878 "data_size": 63488 
01:26:23.878 }, 01:26:23.878 { 01:26:23.878 "name": "pt4", 01:26:23.878 "uuid": "00000000-0000-0000-0000-000000000004", 01:26:23.878 "is_configured": true, 01:26:23.878 "data_offset": 2048, 01:26:23.878 "data_size": 63488 01:26:23.878 } 01:26:23.878 ] 01:26:23.878 }' 01:26:23.878 05:21:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:26:23.878 05:21:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:26:24.444 05:21:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 01:26:24.444 05:21:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 01:26:24.444 05:21:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 01:26:24.444 05:21:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 01:26:24.444 05:21:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 01:26:24.444 05:21:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 01:26:24.444 05:21:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 01:26:24.444 05:21:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:24.444 05:21:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:26:24.444 05:21:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 01:26:24.444 [2024-12-09 05:21:15.792103] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 01:26:24.444 05:21:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:24.444 05:21:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 01:26:24.444 "name": "raid_bdev1", 01:26:24.444 "aliases": [ 01:26:24.444 "d6b7ca45-8a5d-4e02-a931-1d2971f0251c" 01:26:24.444 ], 
01:26:24.444 "product_name": "Raid Volume", 01:26:24.444 "block_size": 512, 01:26:24.444 "num_blocks": 63488, 01:26:24.444 "uuid": "d6b7ca45-8a5d-4e02-a931-1d2971f0251c", 01:26:24.444 "assigned_rate_limits": { 01:26:24.444 "rw_ios_per_sec": 0, 01:26:24.444 "rw_mbytes_per_sec": 0, 01:26:24.444 "r_mbytes_per_sec": 0, 01:26:24.444 "w_mbytes_per_sec": 0 01:26:24.444 }, 01:26:24.444 "claimed": false, 01:26:24.444 "zoned": false, 01:26:24.444 "supported_io_types": { 01:26:24.444 "read": true, 01:26:24.444 "write": true, 01:26:24.444 "unmap": false, 01:26:24.444 "flush": false, 01:26:24.444 "reset": true, 01:26:24.444 "nvme_admin": false, 01:26:24.444 "nvme_io": false, 01:26:24.444 "nvme_io_md": false, 01:26:24.444 "write_zeroes": true, 01:26:24.444 "zcopy": false, 01:26:24.444 "get_zone_info": false, 01:26:24.444 "zone_management": false, 01:26:24.444 "zone_append": false, 01:26:24.444 "compare": false, 01:26:24.444 "compare_and_write": false, 01:26:24.444 "abort": false, 01:26:24.444 "seek_hole": false, 01:26:24.444 "seek_data": false, 01:26:24.444 "copy": false, 01:26:24.444 "nvme_iov_md": false 01:26:24.444 }, 01:26:24.444 "memory_domains": [ 01:26:24.444 { 01:26:24.444 "dma_device_id": "system", 01:26:24.444 "dma_device_type": 1 01:26:24.444 }, 01:26:24.444 { 01:26:24.444 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:26:24.444 "dma_device_type": 2 01:26:24.444 }, 01:26:24.444 { 01:26:24.444 "dma_device_id": "system", 01:26:24.444 "dma_device_type": 1 01:26:24.444 }, 01:26:24.444 { 01:26:24.444 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:26:24.445 "dma_device_type": 2 01:26:24.445 }, 01:26:24.445 { 01:26:24.445 "dma_device_id": "system", 01:26:24.445 "dma_device_type": 1 01:26:24.445 }, 01:26:24.445 { 01:26:24.445 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:26:24.445 "dma_device_type": 2 01:26:24.445 }, 01:26:24.445 { 01:26:24.445 "dma_device_id": "system", 01:26:24.445 "dma_device_type": 1 01:26:24.445 }, 01:26:24.445 { 01:26:24.445 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 01:26:24.445 "dma_device_type": 2 01:26:24.445 } 01:26:24.445 ], 01:26:24.445 "driver_specific": { 01:26:24.445 "raid": { 01:26:24.445 "uuid": "d6b7ca45-8a5d-4e02-a931-1d2971f0251c", 01:26:24.445 "strip_size_kb": 0, 01:26:24.445 "state": "online", 01:26:24.445 "raid_level": "raid1", 01:26:24.445 "superblock": true, 01:26:24.445 "num_base_bdevs": 4, 01:26:24.445 "num_base_bdevs_discovered": 4, 01:26:24.445 "num_base_bdevs_operational": 4, 01:26:24.445 "base_bdevs_list": [ 01:26:24.445 { 01:26:24.445 "name": "pt1", 01:26:24.445 "uuid": "00000000-0000-0000-0000-000000000001", 01:26:24.445 "is_configured": true, 01:26:24.445 "data_offset": 2048, 01:26:24.445 "data_size": 63488 01:26:24.445 }, 01:26:24.445 { 01:26:24.445 "name": "pt2", 01:26:24.445 "uuid": "00000000-0000-0000-0000-000000000002", 01:26:24.445 "is_configured": true, 01:26:24.445 "data_offset": 2048, 01:26:24.445 "data_size": 63488 01:26:24.445 }, 01:26:24.445 { 01:26:24.445 "name": "pt3", 01:26:24.445 "uuid": "00000000-0000-0000-0000-000000000003", 01:26:24.445 "is_configured": true, 01:26:24.445 "data_offset": 2048, 01:26:24.445 "data_size": 63488 01:26:24.445 }, 01:26:24.445 { 01:26:24.445 "name": "pt4", 01:26:24.445 "uuid": "00000000-0000-0000-0000-000000000004", 01:26:24.445 "is_configured": true, 01:26:24.445 "data_offset": 2048, 01:26:24.445 "data_size": 63488 01:26:24.445 } 01:26:24.445 ] 01:26:24.445 } 01:26:24.445 } 01:26:24.445 }' 01:26:24.445 05:21:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 01:26:24.445 05:21:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 01:26:24.445 pt2 01:26:24.445 pt3 01:26:24.445 pt4' 01:26:24.445 05:21:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:26:24.445 05:21:15 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 01:26:24.445 05:21:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:26:24.445 05:21:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 01:26:24.445 05:21:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:26:24.445 05:21:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:24.445 05:21:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:26:24.445 05:21:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:24.445 05:21:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:26:24.445 05:21:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:26:24.445 05:21:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:26:24.445 05:21:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:26:24.445 05:21:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 01:26:24.445 05:21:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:24.445 05:21:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:26:24.445 05:21:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:24.445 05:21:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:26:24.445 05:21:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:26:24.445 05:21:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:26:24.445 05:21:16 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:26:24.445 05:21:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 01:26:24.445 05:21:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:24.445 05:21:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:26:24.703 05:21:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:24.703 05:21:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:26:24.703 05:21:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:26:24.703 05:21:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:26:24.703 05:21:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 01:26:24.703 05:21:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:26:24.703 05:21:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:24.703 05:21:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:26:24.704 05:21:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:24.704 05:21:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:26:24.704 05:21:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:26:24.704 05:21:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 01:26:24.704 05:21:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 01:26:24.704 05:21:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 01:26:24.704 05:21:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:26:24.704 [2024-12-09 05:21:16.160489] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 01:26:24.704 05:21:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:24.704 05:21:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=d6b7ca45-8a5d-4e02-a931-1d2971f0251c 01:26:24.704 05:21:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z d6b7ca45-8a5d-4e02-a931-1d2971f0251c ']' 01:26:24.704 05:21:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 01:26:24.704 05:21:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:24.704 05:21:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:26:24.704 [2024-12-09 05:21:16.208097] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 01:26:24.704 [2024-12-09 05:21:16.208264] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 01:26:24.704 [2024-12-09 05:21:16.208496] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 01:26:24.704 [2024-12-09 05:21:16.208728] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 01:26:24.704 [2024-12-09 05:21:16.208881] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 01:26:24.704 05:21:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:24.704 05:21:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 01:26:24.704 05:21:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:24.704 05:21:16 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 01:26:24.704 05:21:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 01:26:24.704 05:21:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:24.704 05:21:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 01:26:24.704 05:21:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 01:26:24.704 05:21:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 01:26:24.704 05:21:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 01:26:24.704 05:21:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:24.704 05:21:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:26:24.704 05:21:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:24.704 05:21:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 01:26:24.704 05:21:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 01:26:24.704 05:21:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:24.704 05:21:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:26:24.704 05:21:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:24.704 05:21:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 01:26:24.704 05:21:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 01:26:24.704 05:21:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:24.704 05:21:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:26:24.704 05:21:16 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:24.704 05:21:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 01:26:24.704 05:21:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 01:26:24.704 05:21:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:24.704 05:21:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:26:24.704 05:21:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:24.704 05:21:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 01:26:24.704 05:21:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:24.704 05:21:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:26:24.704 05:21:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 01:26:24.962 05:21:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:24.962 05:21:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 01:26:24.962 05:21:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 01:26:24.962 05:21:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 01:26:24.962 05:21:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 01:26:24.962 05:21:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 01:26:24.962 05:21:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:26:24.962 05:21:16 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 01:26:24.962 05:21:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:26:24.962 05:21:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 01:26:24.962 05:21:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:24.962 05:21:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:26:24.962 [2024-12-09 05:21:16.368147] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 01:26:24.962 [2024-12-09 05:21:16.370931] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 01:26:24.962 [2024-12-09 05:21:16.371157] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 01:26:24.962 [2024-12-09 05:21:16.371235] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 01:26:24.962 [2024-12-09 05:21:16.371318] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 01:26:24.962 [2024-12-09 05:21:16.371422] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 01:26:24.962 [2024-12-09 05:21:16.371460] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 01:26:24.962 [2024-12-09 05:21:16.371494] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 01:26:24.962 [2024-12-09 05:21:16.371518] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 01:26:24.962 [2024-12-09 05:21:16.371536] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name 
raid_bdev1, state configuring 01:26:24.962 request: 01:26:24.962 { 01:26:24.962 "name": "raid_bdev1", 01:26:24.962 "raid_level": "raid1", 01:26:24.962 "base_bdevs": [ 01:26:24.962 "malloc1", 01:26:24.962 "malloc2", 01:26:24.962 "malloc3", 01:26:24.962 "malloc4" 01:26:24.962 ], 01:26:24.962 "superblock": false, 01:26:24.962 "method": "bdev_raid_create", 01:26:24.962 "req_id": 1 01:26:24.962 } 01:26:24.962 Got JSON-RPC error response 01:26:24.962 response: 01:26:24.962 { 01:26:24.962 "code": -17, 01:26:24.962 "message": "Failed to create RAID bdev raid_bdev1: File exists" 01:26:24.962 } 01:26:24.962 05:21:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 01:26:24.962 05:21:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 01:26:24.962 05:21:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:26:24.962 05:21:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:26:24.962 05:21:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:26:24.962 05:21:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 01:26:24.962 05:21:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 01:26:24.962 05:21:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:24.962 05:21:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:26:24.962 05:21:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:24.962 05:21:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 01:26:24.962 05:21:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 01:26:24.962 05:21:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 01:26:24.962 
05:21:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:24.962 05:21:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:26:24.962 [2024-12-09 05:21:16.436238] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 01:26:24.962 [2024-12-09 05:21:16.436333] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:26:24.962 [2024-12-09 05:21:16.436375] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 01:26:24.962 [2024-12-09 05:21:16.436397] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:26:24.962 [2024-12-09 05:21:16.439377] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:26:24.962 [2024-12-09 05:21:16.439593] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 01:26:24.962 [2024-12-09 05:21:16.439714] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 01:26:24.962 [2024-12-09 05:21:16.439822] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 01:26:24.962 pt1 01:26:24.962 05:21:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:24.962 05:21:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 01:26:24.962 05:21:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:26:24.962 05:21:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:26:24.962 05:21:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:26:24.962 05:21:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:26:24.962 05:21:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 01:26:24.962 05:21:16 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:26:24.962 05:21:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:26:24.962 05:21:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:26:24.962 05:21:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:26:24.962 05:21:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:26:24.962 05:21:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:26:24.962 05:21:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:24.962 05:21:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:26:24.962 05:21:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:24.962 05:21:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:26:24.962 "name": "raid_bdev1", 01:26:24.962 "uuid": "d6b7ca45-8a5d-4e02-a931-1d2971f0251c", 01:26:24.962 "strip_size_kb": 0, 01:26:24.962 "state": "configuring", 01:26:24.962 "raid_level": "raid1", 01:26:24.962 "superblock": true, 01:26:24.962 "num_base_bdevs": 4, 01:26:24.962 "num_base_bdevs_discovered": 1, 01:26:24.962 "num_base_bdevs_operational": 4, 01:26:24.962 "base_bdevs_list": [ 01:26:24.962 { 01:26:24.962 "name": "pt1", 01:26:24.962 "uuid": "00000000-0000-0000-0000-000000000001", 01:26:24.962 "is_configured": true, 01:26:24.962 "data_offset": 2048, 01:26:24.962 "data_size": 63488 01:26:24.962 }, 01:26:24.962 { 01:26:24.962 "name": null, 01:26:24.962 "uuid": "00000000-0000-0000-0000-000000000002", 01:26:24.962 "is_configured": false, 01:26:24.962 "data_offset": 2048, 01:26:24.962 "data_size": 63488 01:26:24.962 }, 01:26:24.962 { 01:26:24.963 "name": null, 01:26:24.963 "uuid": "00000000-0000-0000-0000-000000000003", 01:26:24.963 
"is_configured": false, 01:26:24.963 "data_offset": 2048, 01:26:24.963 "data_size": 63488 01:26:24.963 }, 01:26:24.963 { 01:26:24.963 "name": null, 01:26:24.963 "uuid": "00000000-0000-0000-0000-000000000004", 01:26:24.963 "is_configured": false, 01:26:24.963 "data_offset": 2048, 01:26:24.963 "data_size": 63488 01:26:24.963 } 01:26:24.963 ] 01:26:24.963 }' 01:26:24.963 05:21:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:26:24.963 05:21:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:26:25.527 05:21:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 01:26:25.527 05:21:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 01:26:25.527 05:21:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:25.527 05:21:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:26:25.527 [2024-12-09 05:21:16.964529] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 01:26:25.527 [2024-12-09 05:21:16.964662] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:26:25.527 [2024-12-09 05:21:16.964717] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 01:26:25.527 [2024-12-09 05:21:16.964752] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:26:25.527 [2024-12-09 05:21:16.965605] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:26:25.527 [2024-12-09 05:21:16.965674] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 01:26:25.527 [2024-12-09 05:21:16.965840] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 01:26:25.527 [2024-12-09 05:21:16.965899] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 
01:26:25.527 pt2 01:26:25.527 05:21:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:25.527 05:21:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 01:26:25.527 05:21:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:25.527 05:21:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:26:25.527 [2024-12-09 05:21:16.972514] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 01:26:25.527 05:21:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:25.527 05:21:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 01:26:25.527 05:21:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:26:25.527 05:21:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:26:25.527 05:21:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:26:25.527 05:21:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:26:25.527 05:21:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 01:26:25.527 05:21:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:26:25.527 05:21:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:26:25.527 05:21:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:26:25.527 05:21:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:26:25.527 05:21:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:26:25.527 05:21:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
01:26:25.527 05:21:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:25.527 05:21:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:26:25.527 05:21:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:25.527 05:21:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:26:25.527 "name": "raid_bdev1", 01:26:25.527 "uuid": "d6b7ca45-8a5d-4e02-a931-1d2971f0251c", 01:26:25.527 "strip_size_kb": 0, 01:26:25.527 "state": "configuring", 01:26:25.527 "raid_level": "raid1", 01:26:25.527 "superblock": true, 01:26:25.527 "num_base_bdevs": 4, 01:26:25.527 "num_base_bdevs_discovered": 1, 01:26:25.527 "num_base_bdevs_operational": 4, 01:26:25.527 "base_bdevs_list": [ 01:26:25.527 { 01:26:25.527 "name": "pt1", 01:26:25.527 "uuid": "00000000-0000-0000-0000-000000000001", 01:26:25.527 "is_configured": true, 01:26:25.527 "data_offset": 2048, 01:26:25.527 "data_size": 63488 01:26:25.527 }, 01:26:25.527 { 01:26:25.527 "name": null, 01:26:25.527 "uuid": "00000000-0000-0000-0000-000000000002", 01:26:25.527 "is_configured": false, 01:26:25.527 "data_offset": 0, 01:26:25.527 "data_size": 63488 01:26:25.527 }, 01:26:25.527 { 01:26:25.527 "name": null, 01:26:25.528 "uuid": "00000000-0000-0000-0000-000000000003", 01:26:25.528 "is_configured": false, 01:26:25.528 "data_offset": 2048, 01:26:25.528 "data_size": 63488 01:26:25.528 }, 01:26:25.528 { 01:26:25.528 "name": null, 01:26:25.528 "uuid": "00000000-0000-0000-0000-000000000004", 01:26:25.528 "is_configured": false, 01:26:25.528 "data_offset": 2048, 01:26:25.528 "data_size": 63488 01:26:25.528 } 01:26:25.528 ] 01:26:25.528 }' 01:26:25.528 05:21:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:26:25.528 05:21:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:26:26.094 05:21:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i 
= 1 )) 01:26:26.094 05:21:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 01:26:26.094 05:21:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 01:26:26.094 05:21:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:26.094 05:21:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:26:26.094 [2024-12-09 05:21:17.504610] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 01:26:26.094 [2024-12-09 05:21:17.504699] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:26:26.094 [2024-12-09 05:21:17.504734] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 01:26:26.094 [2024-12-09 05:21:17.504751] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:26:26.094 [2024-12-09 05:21:17.505343] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:26:26.094 [2024-12-09 05:21:17.505406] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 01:26:26.094 [2024-12-09 05:21:17.505555] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 01:26:26.094 [2024-12-09 05:21:17.505601] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 01:26:26.094 pt2 01:26:26.094 05:21:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:26.094 05:21:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 01:26:26.094 05:21:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 01:26:26.094 05:21:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 01:26:26.094 05:21:17 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:26.094 05:21:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:26:26.094 [2024-12-09 05:21:17.512574] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 01:26:26.094 [2024-12-09 05:21:17.512635] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:26:26.094 [2024-12-09 05:21:17.512666] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 01:26:26.094 [2024-12-09 05:21:17.512680] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:26:26.094 [2024-12-09 05:21:17.513135] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:26:26.094 [2024-12-09 05:21:17.513168] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 01:26:26.094 [2024-12-09 05:21:17.513252] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 01:26:26.094 [2024-12-09 05:21:17.513282] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 01:26:26.094 pt3 01:26:26.094 05:21:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:26.094 05:21:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 01:26:26.094 05:21:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 01:26:26.094 05:21:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 01:26:26.094 05:21:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:26.094 05:21:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:26:26.094 [2024-12-09 05:21:17.520550] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 01:26:26.094 [2024-12-09 
05:21:17.520605] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:26:26.094 [2024-12-09 05:21:17.520634] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 01:26:26.094 [2024-12-09 05:21:17.520649] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:26:26.094 [2024-12-09 05:21:17.521123] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:26:26.094 [2024-12-09 05:21:17.521154] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 01:26:26.095 [2024-12-09 05:21:17.521253] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 01:26:26.095 [2024-12-09 05:21:17.521290] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 01:26:26.095 [2024-12-09 05:21:17.521506] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 01:26:26.095 [2024-12-09 05:21:17.521531] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 01:26:26.095 [2024-12-09 05:21:17.521847] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 01:26:26.095 [2024-12-09 05:21:17.522050] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 01:26:26.095 [2024-12-09 05:21:17.522071] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 01:26:26.095 [2024-12-09 05:21:17.522235] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:26:26.095 pt4 01:26:26.095 05:21:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:26.095 05:21:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 01:26:26.095 05:21:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 01:26:26.095 05:21:17 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 01:26:26.095 05:21:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:26:26.095 05:21:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:26:26.095 05:21:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:26:26.095 05:21:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:26:26.095 05:21:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 01:26:26.095 05:21:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:26:26.095 05:21:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:26:26.095 05:21:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:26:26.095 05:21:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:26:26.095 05:21:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:26:26.095 05:21:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:26:26.095 05:21:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:26.095 05:21:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:26:26.095 05:21:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:26.095 05:21:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:26:26.095 "name": "raid_bdev1", 01:26:26.095 "uuid": "d6b7ca45-8a5d-4e02-a931-1d2971f0251c", 01:26:26.095 "strip_size_kb": 0, 01:26:26.095 "state": "online", 01:26:26.095 "raid_level": "raid1", 01:26:26.095 "superblock": true, 01:26:26.095 "num_base_bdevs": 4, 01:26:26.095 
"num_base_bdevs_discovered": 4,
01:26:26.095 "num_base_bdevs_operational": 4,
01:26:26.095 "base_bdevs_list": [
01:26:26.095 {
01:26:26.095 "name": "pt1",
01:26:26.095 "uuid": "00000000-0000-0000-0000-000000000001",
01:26:26.095 "is_configured": true,
01:26:26.095 "data_offset": 2048,
01:26:26.095 "data_size": 63488
01:26:26.095 },
01:26:26.095 {
01:26:26.095 "name": "pt2",
01:26:26.095 "uuid": "00000000-0000-0000-0000-000000000002",
01:26:26.095 "is_configured": true,
01:26:26.095 "data_offset": 2048,
01:26:26.095 "data_size": 63488
01:26:26.095 },
01:26:26.095 {
01:26:26.095 "name": "pt3",
01:26:26.095 "uuid": "00000000-0000-0000-0000-000000000003",
01:26:26.095 "is_configured": true,
01:26:26.095 "data_offset": 2048,
01:26:26.095 "data_size": 63488
01:26:26.095 },
01:26:26.095 {
01:26:26.095 "name": "pt4",
01:26:26.095 "uuid": "00000000-0000-0000-0000-000000000004",
01:26:26.095 "is_configured": true,
01:26:26.095 "data_offset": 2048,
01:26:26.095 "data_size": 63488
01:26:26.095 }
01:26:26.095 ]
01:26:26.095 }'
01:26:26.095 05:21:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
01:26:26.095 05:21:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
01:26:26.661 05:21:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1
01:26:26.661 05:21:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
01:26:26.661 05:21:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
01:26:26.661 05:21:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
01:26:26.661 05:21:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
01:26:26.661 05:21:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
01:26:26.661 05:21:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
01:26:26.661 05:21:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
01:26:26.661 05:21:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
01:26:26.661 05:21:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
01:26:26.661 [2024-12-09 05:21:18.053186] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
01:26:26.661 05:21:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:26:26.661 05:21:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
01:26:26.661 "name": "raid_bdev1",
01:26:26.661 "aliases": [
01:26:26.661 "d6b7ca45-8a5d-4e02-a931-1d2971f0251c"
01:26:26.661 ],
01:26:26.661 "product_name": "Raid Volume",
01:26:26.661 "block_size": 512,
01:26:26.661 "num_blocks": 63488,
01:26:26.661 "uuid": "d6b7ca45-8a5d-4e02-a931-1d2971f0251c",
01:26:26.661 "assigned_rate_limits": {
01:26:26.661 "rw_ios_per_sec": 0,
01:26:26.661 "rw_mbytes_per_sec": 0,
01:26:26.661 "r_mbytes_per_sec": 0,
01:26:26.661 "w_mbytes_per_sec": 0
01:26:26.661 },
01:26:26.661 "claimed": false,
01:26:26.661 "zoned": false,
01:26:26.661 "supported_io_types": {
01:26:26.661 "read": true,
01:26:26.661 "write": true,
01:26:26.661 "unmap": false,
01:26:26.661 "flush": false,
01:26:26.661 "reset": true,
01:26:26.661 "nvme_admin": false,
01:26:26.661 "nvme_io": false,
01:26:26.661 "nvme_io_md": false,
01:26:26.661 "write_zeroes": true,
01:26:26.661 "zcopy": false,
01:26:26.661 "get_zone_info": false,
01:26:26.661 "zone_management": false,
01:26:26.661 "zone_append": false,
01:26:26.661 "compare": false,
01:26:26.661 "compare_and_write": false,
01:26:26.661 "abort": false,
01:26:26.661 "seek_hole": false,
01:26:26.661 "seek_data": false,
01:26:26.661 "copy": false,
01:26:26.661 "nvme_iov_md": false
01:26:26.661 },
01:26:26.661 "memory_domains": [
01:26:26.661 {
01:26:26.661 "dma_device_id": "system",
01:26:26.661 "dma_device_type": 1
01:26:26.661 },
01:26:26.661 {
01:26:26.661 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
01:26:26.661 "dma_device_type": 2
01:26:26.661 },
01:26:26.661 {
01:26:26.661 "dma_device_id": "system",
01:26:26.661 "dma_device_type": 1
01:26:26.661 },
01:26:26.661 {
01:26:26.661 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
01:26:26.661 "dma_device_type": 2
01:26:26.661 },
01:26:26.661 {
01:26:26.661 "dma_device_id": "system",
01:26:26.661 "dma_device_type": 1
01:26:26.661 },
01:26:26.661 {
01:26:26.661 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
01:26:26.661 "dma_device_type": 2
01:26:26.661 },
01:26:26.661 {
01:26:26.661 "dma_device_id": "system",
01:26:26.661 "dma_device_type": 1
01:26:26.661 },
01:26:26.661 {
01:26:26.661 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
01:26:26.661 "dma_device_type": 2
01:26:26.661 }
01:26:26.661 ],
01:26:26.661 "driver_specific": {
01:26:26.661 "raid": {
01:26:26.661 "uuid": "d6b7ca45-8a5d-4e02-a931-1d2971f0251c",
01:26:26.661 "strip_size_kb": 0,
01:26:26.661 "state": "online",
01:26:26.661 "raid_level": "raid1",
01:26:26.661 "superblock": true,
01:26:26.661 "num_base_bdevs": 4,
01:26:26.661 "num_base_bdevs_discovered": 4,
01:26:26.661 "num_base_bdevs_operational": 4,
01:26:26.661 "base_bdevs_list": [
01:26:26.661 {
01:26:26.661 "name": "pt1",
01:26:26.661 "uuid": "00000000-0000-0000-0000-000000000001",
01:26:26.661 "is_configured": true,
01:26:26.661 "data_offset": 2048,
01:26:26.661 "data_size": 63488
01:26:26.661 },
01:26:26.661 {
01:26:26.661 "name": "pt2",
01:26:26.661 "uuid": "00000000-0000-0000-0000-000000000002",
01:26:26.661 "is_configured": true,
01:26:26.661 "data_offset": 2048,
01:26:26.661 "data_size": 63488
01:26:26.661 },
01:26:26.661 {
01:26:26.661 "name": "pt3",
01:26:26.661 "uuid": "00000000-0000-0000-0000-000000000003",
01:26:26.661 "is_configured": true,
01:26:26.661 "data_offset": 2048,
01:26:26.661 "data_size": 63488
01:26:26.661 },
01:26:26.661 {
01:26:26.661 "name": "pt4",
01:26:26.661 "uuid": "00000000-0000-0000-0000-000000000004",
01:26:26.661 "is_configured": true,
01:26:26.661 "data_offset": 2048,
01:26:26.661 "data_size": 63488
01:26:26.661 }
01:26:26.661 ]
01:26:26.661 }
01:26:26.661 }
01:26:26.661 }'
01:26:26.661 05:21:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
01:26:26.661 05:21:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
01:26:26.661 pt2
01:26:26.661 pt3
01:26:26.661 pt4'
01:26:26.661 05:21:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
01:26:26.661 05:21:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
01:26:26.661 05:21:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
01:26:26.661 05:21:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
01:26:26.661 05:21:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
01:26:26.661 05:21:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
01:26:26.661 05:21:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
01:26:26.661 05:21:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:26:26.661 05:21:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
01:26:26.661 05:21:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
01:26:26.661 05:21:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
01:26:26.661 05:21:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
01:26:26.661 05:21:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
01:26:26.661 05:21:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
01:26:26.661 05:21:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
01:26:26.661 05:21:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:26:26.919 05:21:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
01:26:26.919 05:21:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
01:26:26.919 05:21:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
01:26:26.919 05:21:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3
01:26:26.919 05:21:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
01:26:26.919 05:21:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
01:26:26.919 05:21:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
01:26:26.919 05:21:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:26:26.919 05:21:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
01:26:26.919 05:21:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
01:26:26.919 05:21:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
01:26:26.919 05:21:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4
01:26:26.919 05:21:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
01:26:26.919 05:21:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
01:26:26.919 05:21:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
01:26:26.919 05:21:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:26:26.919 05:21:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
01:26:26.919 05:21:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
01:26:26.919 05:21:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
01:26:26.919 05:21:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
01:26:26.919 05:21:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
01:26:26.919 05:21:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid'
01:26:26.919 [2024-12-09 05:21:18.417160] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
01:26:26.919 05:21:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:26:26.919 05:21:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' d6b7ca45-8a5d-4e02-a931-1d2971f0251c '!=' d6b7ca45-8a5d-4e02-a931-1d2971f0251c ']'
01:26:26.919 05:21:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1
01:26:26.919 05:21:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in
01:26:26.919 05:21:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0
01:26:26.919 05:21:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1
01:26:26.919 05:21:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
01:26:26.919 05:21:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
01:26:26.919 [2024-12-09 05:21:18.468921] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1
01:26:26.919 05:21:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:26:26.919 05:21:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
01:26:26.919 05:21:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
01:26:26.920 05:21:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
01:26:26.920 05:21:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
01:26:26.920 05:21:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
01:26:26.920 05:21:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
01:26:26.920 05:21:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
01:26:26.920 05:21:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
01:26:26.920 05:21:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
01:26:26.920 05:21:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
01:26:26.920 05:21:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
01:26:26.920 05:21:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
01:26:26.920 05:21:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
01:26:26.920 05:21:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
01:26:26.920 05:21:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:26:26.920 05:21:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
01:26:26.920 "name": "raid_bdev1",
01:26:26.920 "uuid": "d6b7ca45-8a5d-4e02-a931-1d2971f0251c",
01:26:26.920 "strip_size_kb": 0,
01:26:26.920 "state": "online",
01:26:26.920 "raid_level": "raid1",
01:26:26.920 "superblock": true,
01:26:26.920 "num_base_bdevs": 4,
01:26:26.920 "num_base_bdevs_discovered": 3,
01:26:26.920 "num_base_bdevs_operational": 3,
01:26:26.920 "base_bdevs_list": [
01:26:26.920 {
01:26:26.920 "name": null,
01:26:26.920 "uuid": "00000000-0000-0000-0000-000000000000",
01:26:26.920 "is_configured": false,
01:26:26.920 "data_offset": 0,
01:26:26.920 "data_size": 63488
01:26:26.920 },
01:26:26.920 {
01:26:26.920 "name": "pt2",
01:26:26.920 "uuid": "00000000-0000-0000-0000-000000000002",
01:26:26.920 "is_configured": true,
01:26:26.920 "data_offset": 2048,
01:26:26.920 "data_size": 63488
01:26:26.920 },
01:26:26.920 {
01:26:26.920 "name": "pt3",
01:26:26.920 "uuid": "00000000-0000-0000-0000-000000000003",
01:26:26.920 "is_configured": true,
01:26:26.920 "data_offset": 2048,
01:26:26.920 "data_size": 63488
01:26:26.920 },
01:26:26.920 {
01:26:26.920 "name": "pt4",
01:26:26.920 "uuid": "00000000-0000-0000-0000-000000000004",
01:26:26.920 "is_configured": true,
01:26:26.920 "data_offset": 2048,
01:26:26.920 "data_size": 63488
01:26:26.920 }
01:26:26.920 ]
01:26:26.920 }'
01:26:26.920 05:21:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
01:26:26.920 05:21:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
01:26:27.486 05:21:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1
01:26:27.486 05:21:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
01:26:27.486 05:21:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
01:26:27.486 [2024-12-09 05:21:19.001092] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
01:26:27.486 [2024-12-09 05:21:19.002383] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
01:26:27.486 [2024-12-09 05:21:19.002629] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct [2024-12-09 05:21:19.002771] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct [2024-12-09 05:21:19.002796] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline
01:26:27.486 05:21:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:26:27.486 05:21:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]'
01:26:27.486 05:21:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all
01:26:27.486 05:21:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
01:26:27.486 05:21:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
01:26:27.486 05:21:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:26:27.486 05:21:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev=
01:26:27.486 05:21:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']'
01:26:27.486 05:21:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 ))
01:26:27.486 05:21:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs ))
01:26:27.486 05:21:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2
01:26:27.486 05:21:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
01:26:27.486 05:21:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
01:26:27.486 05:21:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:26:27.486 05:21:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ ))
01:26:27.486 05:21:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs ))
01:26:27.486 05:21:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3
01:26:27.486 05:21:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
01:26:27.486 05:21:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
01:26:27.486 05:21:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:26:27.486 05:21:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ ))
01:26:27.486 05:21:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs ))
01:26:27.486 05:21:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4
01:26:27.486 05:21:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
01:26:27.486 05:21:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
01:26:27.486 05:21:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:26:27.486 05:21:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ ))
01:26:27.486 05:21:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs ))
01:26:27.486 05:21:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 ))
01:26:27.486 05:21:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 ))
01:26:27.486 05:21:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
01:26:27.486 05:21:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
01:26:27.486 05:21:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
01:26:27.486 [2024-12-09 05:21:19.093048] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2
01:26:27.486 [2024-12-09 05:21:19.093289] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
01:26:27.486 [2024-12-09 05:21:19.093342] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480
01:26:27.486 [2024-12-09 05:21:19.093397] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
01:26:27.486 [2024-12-09 05:21:19.096974] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
01:26:27.486 [2024-12-09 05:21:19.097031] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
01:26:27.486 [2024-12-09 05:21:19.097166] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
01:26:27.486 [2024-12-09 05:21:19.097243] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed pt2
01:26:27.486 05:21:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:26:27.486 05:21:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3
01:26:27.486 05:21:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
01:26:27.486 05:21:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
01:26:27.486 05:21:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
01:26:27.486 05:21:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
01:26:27.486 05:21:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
01:26:27.486 05:21:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
01:26:27.486 05:21:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
01:26:27.486 05:21:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
01:26:27.487 05:21:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
01:26:27.744 05:21:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
01:26:27.744 05:21:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
01:26:27.744 05:21:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
01:26:27.744 05:21:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
01:26:27.744 05:21:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:26:27.744 05:21:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
01:26:27.744 "name": "raid_bdev1",
01:26:27.744 "uuid": "d6b7ca45-8a5d-4e02-a931-1d2971f0251c",
01:26:27.744 "strip_size_kb": 0,
01:26:27.744 "state": "configuring",
01:26:27.744 "raid_level": "raid1",
01:26:27.744 "superblock": true,
01:26:27.744 "num_base_bdevs": 4,
01:26:27.744 "num_base_bdevs_discovered": 1,
01:26:27.744 "num_base_bdevs_operational": 3,
01:26:27.744 "base_bdevs_list": [
01:26:27.744 {
01:26:27.744 "name": null,
01:26:27.744 "uuid": "00000000-0000-0000-0000-000000000000",
01:26:27.744 "is_configured": false,
01:26:27.744 "data_offset": 2048,
01:26:27.744 "data_size": 63488
01:26:27.744 },
01:26:27.744 {
01:26:27.744 "name": "pt2",
01:26:27.744 "uuid": "00000000-0000-0000-0000-000000000002",
01:26:27.744 "is_configured": true,
01:26:27.744 "data_offset": 2048,
01:26:27.744 "data_size": 63488
01:26:27.744 },
01:26:27.744 {
01:26:27.744 "name": null,
01:26:27.744 "uuid": "00000000-0000-0000-0000-000000000003",
01:26:27.744 "is_configured": false,
01:26:27.744 "data_offset": 2048,
01:26:27.744 "data_size": 63488
01:26:27.744 },
01:26:27.744 {
01:26:27.744 "name": null,
01:26:27.744 "uuid": "00000000-0000-0000-0000-000000000004",
01:26:27.744 "is_configured": false,
01:26:27.744 "data_offset": 2048,
01:26:27.744 "data_size": 63488
01:26:27.744 }
01:26:27.744 ]
01:26:27.744 }'
01:26:27.744 05:21:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
01:26:27.744 05:21:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
01:26:28.311 05:21:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ ))
01:26:28.311 05:21:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 ))
01:26:28.311 05:21:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
01:26:28.311 05:21:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
01:26:28.311 05:21:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
01:26:28.311 [2024-12-09 05:21:19.629566] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3
01:26:28.311 [2024-12-09 05:21:19.629720] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
01:26:28.311 [2024-12-09 05:21:19.629771] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80
01:26:28.311 [2024-12-09 05:21:19.629793] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
01:26:28.311 [2024-12-09 05:21:19.630642] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
01:26:28.311 [2024-12-09 05:21:19.630957] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
01:26:28.311 [2024-12-09 05:21:19.631145] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3
01:26:28.311 [2024-12-09 05:21:19.631192] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed pt3
01:26:28.311 05:21:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:26:28.311 05:21:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3
01:26:28.311 05:21:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
01:26:28.311 05:21:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
01:26:28.311 05:21:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
01:26:28.311 05:21:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
01:26:28.311 05:21:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
01:26:28.311 05:21:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
01:26:28.311 05:21:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
01:26:28.311 05:21:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
01:26:28.311 05:21:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
01:26:28.311 05:21:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
01:26:28.311 05:21:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
01:26:28.311 05:21:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
01:26:28.311 05:21:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
01:26:28.311 05:21:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:26:28.311 05:21:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
01:26:28.311 "name": "raid_bdev1",
01:26:28.311 "uuid": "d6b7ca45-8a5d-4e02-a931-1d2971f0251c",
01:26:28.311 "strip_size_kb": 0,
01:26:28.311 "state": "configuring",
01:26:28.311 "raid_level": "raid1",
01:26:28.311 "superblock": true,
01:26:28.311 "num_base_bdevs": 4,
01:26:28.311 "num_base_bdevs_discovered": 2,
01:26:28.312 "num_base_bdevs_operational": 3,
01:26:28.312 "base_bdevs_list": [
01:26:28.312 {
01:26:28.312 "name": null,
01:26:28.312 "uuid": "00000000-0000-0000-0000-000000000000",
01:26:28.312 "is_configured": false,
01:26:28.312 "data_offset": 2048,
01:26:28.312 "data_size": 63488
01:26:28.312 },
01:26:28.312 {
01:26:28.312 "name": "pt2",
01:26:28.312 "uuid": "00000000-0000-0000-0000-000000000002",
01:26:28.312 "is_configured": true,
01:26:28.312 "data_offset": 2048,
01:26:28.312 "data_size": 63488
01:26:28.312 },
01:26:28.312 {
01:26:28.312 "name": "pt3",
01:26:28.312 "uuid": "00000000-0000-0000-0000-000000000003",
01:26:28.312 "is_configured": true,
01:26:28.312 "data_offset": 2048,
01:26:28.312 "data_size": 63488
01:26:28.312 },
01:26:28.312 {
01:26:28.312 "name": null,
01:26:28.312 "uuid": "00000000-0000-0000-0000-000000000004",
01:26:28.312 "is_configured": false,
01:26:28.312 "data_offset": 2048,
01:26:28.312 "data_size": 63488
01:26:28.312 }
01:26:28.312 ]
01:26:28.312 }'
01:26:28.312 05:21:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
01:26:28.312 05:21:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
01:26:28.571 05:21:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ ))
01:26:28.571 05:21:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 ))
01:26:28.571 05:21:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3
01:26:28.571 05:21:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004
01:26:28.571 05:21:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
01:26:28.571 05:21:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
01:26:28.571 [2024-12-09 05:21:20.145848] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4
01:26:28.571 [2024-12-09 05:21:20.146426] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
01:26:28.571 [2024-12-09 05:21:20.146660] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80
01:26:28.571 [2024-12-09 05:21:20.146696] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
01:26:28.571 [2024-12-09 05:21:20.147750] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
01:26:28.571 [2024-12-09 05:21:20.147897] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4
01:26:28.571 [2024-12-09 05:21:20.148229] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4
01:26:28.571 [2024-12-09 05:21:20.148433] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed
01:26:28.571 [2024-12-09 05:21:20.148763] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200
01:26:28.571 [2024-12-09 05:21:20.148915] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
01:26:28.571 [2024-12-09 05:21:20.149274] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080
01:26:28.571 [2024-12-09 05:21:20.149570] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200
01:26:28.571 [2024-12-09 05:21:20.149598] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200
01:26:28.571 [2024-12-09 05:21:20.149944] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb pt4
01:26:28.571 05:21:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:26:28.571 05:21:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
01:26:28.571 05:21:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
01:26:28.571 05:21:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
01:26:28.571 05:21:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
01:26:28.571 05:21:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
01:26:28.571 05:21:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
01:26:28.571 05:21:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
01:26:28.571 05:21:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
01:26:28.571 05:21:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
01:26:28.571 05:21:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
01:26:28.571 05:21:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
01:26:28.571 05:21:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
01:26:28.571 05:21:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
01:26:28.571 05:21:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
01:26:28.571 05:21:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:26:28.829 05:21:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
01:26:28.829 "name": "raid_bdev1",
01:26:28.830 "uuid": "d6b7ca45-8a5d-4e02-a931-1d2971f0251c",
01:26:28.830 "strip_size_kb": 0,
01:26:28.830 "state": "online",
01:26:28.830 "raid_level": "raid1",
01:26:28.830 "superblock": true,
01:26:28.830 "num_base_bdevs": 4,
01:26:28.830 "num_base_bdevs_discovered": 3,
01:26:28.830 "num_base_bdevs_operational": 3,
01:26:28.830 "base_bdevs_list": [
01:26:28.830 {
01:26:28.830 "name": null,
01:26:28.830 "uuid": "00000000-0000-0000-0000-000000000000",
01:26:28.830 "is_configured": false,
01:26:28.830 "data_offset": 2048,
01:26:28.830 "data_size": 63488
01:26:28.830 },
01:26:28.830 {
01:26:28.830 "name": "pt2",
01:26:28.830 "uuid": "00000000-0000-0000-0000-000000000002",
01:26:28.830 "is_configured": true,
01:26:28.830 "data_offset": 2048,
01:26:28.830 "data_size": 63488
01:26:28.830 },
01:26:28.830 {
01:26:28.830 "name": "pt3",
01:26:28.830 "uuid": "00000000-0000-0000-0000-000000000003",
01:26:28.830 "is_configured": true,
01:26:28.830 "data_offset": 2048,
01:26:28.830 "data_size": 63488
01:26:28.830 },
01:26:28.830 {
01:26:28.830 "name": "pt4",
01:26:28.830 "uuid": "00000000-0000-0000-0000-000000000004",
01:26:28.830 "is_configured": true,
01:26:28.830 "data_offset": 2048,
01:26:28.830 "data_size": 63488
01:26:28.830 }
01:26:28.830 ]
01:26:28.830 }'
01:26:28.830 05:21:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
01:26:28.830 05:21:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
01:26:29.088 05:21:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1
01:26:29.088 05:21:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
01:26:29.088 05:21:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
01:26:29.088 [2024-12-09 05:21:20.677978] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 [2024-12-09 05:21:20.678014] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline [2024-12-09 05:21:20.678124] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct [2024-12-09 05:21:20.678289] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct [2024-12-09 05:21:20.678316] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline
01:26:29.088 05:21:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:26:29.088 05:21:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all
01:26:29.088 05:21:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
01:26:29.088 05:21:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
01:26:29.088 05:21:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]'
01:26:29.088 05:21:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:26:29.347 05:21:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev=
01:26:29.347 05:21:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']'
01:26:29.347 05:21:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']'
01:26:29.347 05:21:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3
01:26:29.347 05:21:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4
01:26:29.347 05:21:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
01:26:29.347 05:21:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
01:26:29.347 05:21:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:26:29.347 05:21:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
01:26:29.347 05:21:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
01:26:29.347 05:21:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
01:26:29.347 [2024-12-09 05:21:20.753951] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1
01:26:29.347 [2024-12-09 05:21:20.754049] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
01:26:29.347 [2024-12-09 05:21:20.754078] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080
01:26:29.347 [2024-12-09 05:21:20.754099] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
01:26:29.347 [2024-12-09 05:21:20.757211] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
01:26:29.347 [2024-12-09 05:21:20.757457] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
01:26:29.347 [2024-12-09 05:21:20.757612] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
01:26:29.347 [2024-12-09 05:21:20.757684] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
01:26:29.347 [2024-12-09 05:21:20.757878] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2)
01:26:29.347 [2024-12-09 05:21:20.757941] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
01:26:29.347 [2024-12-09 05:21:20.757965] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring
01:26:29.347 [2024-12-09 05:21:20.758057] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
01:26:29.347 [2024-12-09 05:21:20.758270] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed pt1
01:26:29.347 05:21:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:26:29.347 05:21:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']'
01:26:29.347 05:21:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3
01:26:29.347 05:21:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
01:26:29.347 05:21:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
01:26:29.347 05:21:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
01:26:29.347 05:21:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
01:26:29.347 05:21:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
01:26:29.347 05:21:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
01:26:29.347 05:21:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
01:26:29.347 05:21:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
01:26:29.347 05:21:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
01:26:29.347 05:21:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
01:26:29.347 05:21:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
01:26:29.347 05:21:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
01:26:29.347 05:21:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
01:26:29.347 05:21:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:26:29.347 05:21:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
01:26:29.347 "name": "raid_bdev1",
01:26:29.347 "uuid": "d6b7ca45-8a5d-4e02-a931-1d2971f0251c",
01:26:29.347 "strip_size_kb": 0,
01:26:29.347 "state": "configuring",
01:26:29.347 "raid_level": "raid1",
01:26:29.347 "superblock": true,
01:26:29.347 "num_base_bdevs": 4,
01:26:29.347 "num_base_bdevs_discovered": 2,
01:26:29.347 "num_base_bdevs_operational": 3,
01:26:29.347 "base_bdevs_list": [
01:26:29.347 {
01:26:29.347 "name": null,
01:26:29.347 "uuid": "00000000-0000-0000-0000-000000000000",
01:26:29.347 "is_configured": false,
01:26:29.347 "data_offset": 2048,
01:26:29.347
"data_size": 63488 01:26:29.347 }, 01:26:29.347 { 01:26:29.347 "name": "pt2", 01:26:29.347 "uuid": "00000000-0000-0000-0000-000000000002", 01:26:29.347 "is_configured": true, 01:26:29.347 "data_offset": 2048, 01:26:29.347 "data_size": 63488 01:26:29.347 }, 01:26:29.347 { 01:26:29.347 "name": "pt3", 01:26:29.347 "uuid": "00000000-0000-0000-0000-000000000003", 01:26:29.347 "is_configured": true, 01:26:29.347 "data_offset": 2048, 01:26:29.347 "data_size": 63488 01:26:29.347 }, 01:26:29.347 { 01:26:29.347 "name": null, 01:26:29.347 "uuid": "00000000-0000-0000-0000-000000000004", 01:26:29.347 "is_configured": false, 01:26:29.347 "data_offset": 2048, 01:26:29.347 "data_size": 63488 01:26:29.347 } 01:26:29.347 ] 01:26:29.347 }' 01:26:29.347 05:21:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:26:29.347 05:21:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:26:29.917 05:21:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 01:26:29.918 05:21:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 01:26:29.918 05:21:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:29.918 05:21:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:26:29.918 05:21:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:29.918 05:21:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 01:26:29.918 05:21:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 01:26:29.918 05:21:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:29.918 05:21:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:26:29.918 [2024-12-09 
05:21:21.298183] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 01:26:29.918 [2024-12-09 05:21:21.298285] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:26:29.918 [2024-12-09 05:21:21.298323] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 01:26:29.918 [2024-12-09 05:21:21.298339] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:26:29.918 [2024-12-09 05:21:21.298969] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:26:29.918 [2024-12-09 05:21:21.299004] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 01:26:29.918 [2024-12-09 05:21:21.299132] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 01:26:29.918 [2024-12-09 05:21:21.299166] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 01:26:29.918 [2024-12-09 05:21:21.299386] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 01:26:29.918 [2024-12-09 05:21:21.299428] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 01:26:29.918 [2024-12-09 05:21:21.299756] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 01:26:29.918 [2024-12-09 05:21:21.299982] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 01:26:29.918 [2024-12-09 05:21:21.300003] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 01:26:29.918 [2024-12-09 05:21:21.300191] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:26:29.918 pt4 01:26:29.918 05:21:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:29.918 05:21:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 01:26:29.918 05:21:21 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:26:29.918 05:21:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:26:29.918 05:21:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:26:29.918 05:21:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:26:29.918 05:21:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 01:26:29.918 05:21:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:26:29.918 05:21:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:26:29.918 05:21:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:26:29.918 05:21:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:26:29.918 05:21:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:26:29.918 05:21:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:26:29.918 05:21:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:29.918 05:21:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:26:29.918 05:21:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:29.918 05:21:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:26:29.918 "name": "raid_bdev1", 01:26:29.918 "uuid": "d6b7ca45-8a5d-4e02-a931-1d2971f0251c", 01:26:29.918 "strip_size_kb": 0, 01:26:29.918 "state": "online", 01:26:29.918 "raid_level": "raid1", 01:26:29.918 "superblock": true, 01:26:29.918 "num_base_bdevs": 4, 01:26:29.918 "num_base_bdevs_discovered": 3, 01:26:29.918 "num_base_bdevs_operational": 3, 01:26:29.918 "base_bdevs_list": [ 01:26:29.918 { 
01:26:29.918 "name": null, 01:26:29.918 "uuid": "00000000-0000-0000-0000-000000000000", 01:26:29.918 "is_configured": false, 01:26:29.918 "data_offset": 2048, 01:26:29.918 "data_size": 63488 01:26:29.918 }, 01:26:29.918 { 01:26:29.918 "name": "pt2", 01:26:29.918 "uuid": "00000000-0000-0000-0000-000000000002", 01:26:29.918 "is_configured": true, 01:26:29.918 "data_offset": 2048, 01:26:29.918 "data_size": 63488 01:26:29.918 }, 01:26:29.918 { 01:26:29.918 "name": "pt3", 01:26:29.918 "uuid": "00000000-0000-0000-0000-000000000003", 01:26:29.918 "is_configured": true, 01:26:29.918 "data_offset": 2048, 01:26:29.918 "data_size": 63488 01:26:29.918 }, 01:26:29.918 { 01:26:29.918 "name": "pt4", 01:26:29.918 "uuid": "00000000-0000-0000-0000-000000000004", 01:26:29.918 "is_configured": true, 01:26:29.918 "data_offset": 2048, 01:26:29.918 "data_size": 63488 01:26:29.918 } 01:26:29.918 ] 01:26:29.918 }' 01:26:29.918 05:21:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:26:29.918 05:21:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:26:30.492 05:21:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 01:26:30.492 05:21:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 01:26:30.492 05:21:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:30.492 05:21:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:26:30.492 05:21:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:30.492 05:21:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 01:26:30.492 05:21:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 01:26:30.492 05:21:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 01:26:30.492 
05:21:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:30.492 05:21:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:26:30.492 [2024-12-09 05:21:21.866721] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 01:26:30.492 05:21:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:30.492 05:21:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' d6b7ca45-8a5d-4e02-a931-1d2971f0251c '!=' d6b7ca45-8a5d-4e02-a931-1d2971f0251c ']' 01:26:30.492 05:21:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 74624 01:26:30.492 05:21:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 74624 ']' 01:26:30.492 05:21:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 74624 01:26:30.492 05:21:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 01:26:30.492 05:21:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:26:30.492 05:21:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74624 01:26:30.492 killing process with pid 74624 01:26:30.492 05:21:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:26:30.492 05:21:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:26:30.492 05:21:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74624' 01:26:30.492 05:21:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 74624 01:26:30.492 [2024-12-09 05:21:21.939207] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 01:26:30.492 05:21:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 74624 01:26:30.492 [2024-12-09 05:21:21.939321] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 01:26:30.492 [2024-12-09 05:21:21.939459] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 01:26:30.492 [2024-12-09 05:21:21.939484] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 01:26:30.750 [2024-12-09 05:21:22.303945] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 01:26:32.126 05:21:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 01:26:32.126 01:26:32.126 real 0m9.609s 01:26:32.126 user 0m15.656s 01:26:32.126 sys 0m1.447s 01:26:32.126 05:21:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 01:26:32.126 05:21:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:26:32.126 ************************************ 01:26:32.126 END TEST raid_superblock_test 01:26:32.126 ************************************ 01:26:32.126 05:21:23 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 01:26:32.126 05:21:23 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 01:26:32.126 05:21:23 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 01:26:32.126 05:21:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 01:26:32.126 ************************************ 01:26:32.126 START TEST raid_read_error_test 01:26:32.126 ************************************ 01:26:32.126 05:21:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 read 01:26:32.126 05:21:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 01:26:32.126 05:21:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 01:26:32.126 05:21:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 01:26:32.126 05:21:23 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 01:26:32.126 05:21:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 01:26:32.126 05:21:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 01:26:32.126 05:21:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 01:26:32.126 05:21:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 01:26:32.126 05:21:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 01:26:32.126 05:21:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 01:26:32.126 05:21:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 01:26:32.126 05:21:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 01:26:32.126 05:21:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 01:26:32.126 05:21:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 01:26:32.126 05:21:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 01:26:32.126 05:21:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 01:26:32.126 05:21:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 01:26:32.126 05:21:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 01:26:32.126 05:21:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 01:26:32.126 05:21:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 01:26:32.126 05:21:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 01:26:32.126 05:21:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 01:26:32.126 05:21:23 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@797 -- # local bdevperf_log 01:26:32.126 05:21:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 01:26:32.126 05:21:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 01:26:32.126 05:21:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 01:26:32.126 05:21:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 01:26:32.126 05:21:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.BdHuTZy0f2 01:26:32.126 05:21:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75122 01:26:32.126 05:21:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75122 01:26:32.126 05:21:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 01:26:32.126 05:21:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 75122 ']' 01:26:32.126 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:26:32.126 05:21:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:26:32.126 05:21:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 01:26:32.126 05:21:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:26:32.126 05:21:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 01:26:32.126 05:21:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 01:26:32.126 [2024-12-09 05:21:23.619659] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
01:26:32.126 [2024-12-09 05:21:23.620165] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75122 ] 01:26:32.384 [2024-12-09 05:21:23.806446] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:26:32.384 [2024-12-09 05:21:23.930567] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:26:32.643 [2024-12-09 05:21:24.123892] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 01:26:32.643 [2024-12-09 05:21:24.123972] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 01:26:33.210 05:21:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:26:33.210 05:21:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 01:26:33.210 05:21:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 01:26:33.210 05:21:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 01:26:33.210 05:21:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:33.210 05:21:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 01:26:33.210 BaseBdev1_malloc 01:26:33.210 05:21:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:33.210 05:21:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 01:26:33.210 05:21:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:33.210 05:21:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 01:26:33.210 true 01:26:33.210 05:21:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
01:26:33.210 05:21:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 01:26:33.210 05:21:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:33.210 05:21:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 01:26:33.210 [2024-12-09 05:21:24.618018] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 01:26:33.210 [2024-12-09 05:21:24.618103] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:26:33.210 [2024-12-09 05:21:24.618132] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 01:26:33.210 [2024-12-09 05:21:24.618150] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:26:33.210 [2024-12-09 05:21:24.621104] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:26:33.210 [2024-12-09 05:21:24.621170] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 01:26:33.210 BaseBdev1 01:26:33.210 05:21:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:33.210 05:21:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 01:26:33.210 05:21:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 01:26:33.210 05:21:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:33.210 05:21:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 01:26:33.210 BaseBdev2_malloc 01:26:33.210 05:21:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:33.210 05:21:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 01:26:33.210 05:21:24 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 01:26:33.210 05:21:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 01:26:33.210 true 01:26:33.210 05:21:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:33.210 05:21:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 01:26:33.210 05:21:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:33.210 05:21:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 01:26:33.210 [2024-12-09 05:21:24.677022] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 01:26:33.210 [2024-12-09 05:21:24.677109] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:26:33.210 [2024-12-09 05:21:24.677135] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 01:26:33.210 [2024-12-09 05:21:24.677152] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:26:33.210 [2024-12-09 05:21:24.680013] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:26:33.210 [2024-12-09 05:21:24.680075] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 01:26:33.210 BaseBdev2 01:26:33.210 05:21:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:33.210 05:21:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 01:26:33.210 05:21:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 01:26:33.210 05:21:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:33.210 05:21:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 01:26:33.210 BaseBdev3_malloc 01:26:33.210 05:21:24 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:33.210 05:21:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 01:26:33.210 05:21:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:33.210 05:21:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 01:26:33.210 true 01:26:33.210 05:21:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:33.210 05:21:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 01:26:33.210 05:21:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:33.210 05:21:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 01:26:33.210 [2024-12-09 05:21:24.749586] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 01:26:33.210 [2024-12-09 05:21:24.749654] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:26:33.210 [2024-12-09 05:21:24.749682] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 01:26:33.210 [2024-12-09 05:21:24.749700] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:26:33.210 [2024-12-09 05:21:24.752540] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:26:33.211 [2024-12-09 05:21:24.752732] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 01:26:33.211 BaseBdev3 01:26:33.211 05:21:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:33.211 05:21:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 01:26:33.211 05:21:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 01:26:33.211 05:21:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:33.211 05:21:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 01:26:33.211 BaseBdev4_malloc 01:26:33.211 05:21:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:33.211 05:21:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 01:26:33.211 05:21:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:33.211 05:21:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 01:26:33.211 true 01:26:33.211 05:21:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:33.211 05:21:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 01:26:33.211 05:21:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:33.211 05:21:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 01:26:33.211 [2024-12-09 05:21:24.807683] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 01:26:33.211 [2024-12-09 05:21:24.807780] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:26:33.211 [2024-12-09 05:21:24.807821] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 01:26:33.211 [2024-12-09 05:21:24.807855] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:26:33.211 [2024-12-09 05:21:24.810804] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:26:33.211 [2024-12-09 05:21:24.810869] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 01:26:33.211 BaseBdev4 01:26:33.211 05:21:24 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:33.211 05:21:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 01:26:33.211 05:21:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:33.211 05:21:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 01:26:33.211 [2024-12-09 05:21:24.815849] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 01:26:33.211 [2024-12-09 05:21:24.818316] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 01:26:33.211 [2024-12-09 05:21:24.818449] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 01:26:33.211 [2024-12-09 05:21:24.818543] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 01:26:33.211 [2024-12-09 05:21:24.818831] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 01:26:33.211 [2024-12-09 05:21:24.818852] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 01:26:33.211 [2024-12-09 05:21:24.819120] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 01:26:33.211 [2024-12-09 05:21:24.819331] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 01:26:33.211 [2024-12-09 05:21:24.819346] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 01:26:33.211 [2024-12-09 05:21:24.819554] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:26:33.211 05:21:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:33.211 05:21:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 01:26:33.211 05:21:24 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:26:33.211 05:21:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:26:33.211 05:21:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:26:33.211 05:21:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:26:33.211 05:21:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 01:26:33.211 05:21:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:26:33.211 05:21:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:26:33.211 05:21:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:26:33.211 05:21:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:26:33.469 05:21:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:26:33.469 05:21:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:33.469 05:21:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:26:33.469 05:21:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 01:26:33.469 05:21:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:33.469 05:21:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:26:33.469 "name": "raid_bdev1", 01:26:33.469 "uuid": "ce67a753-c524-4b61-b78d-7a3bf59704c3", 01:26:33.469 "strip_size_kb": 0, 01:26:33.469 "state": "online", 01:26:33.469 "raid_level": "raid1", 01:26:33.469 "superblock": true, 01:26:33.469 "num_base_bdevs": 4, 01:26:33.469 "num_base_bdevs_discovered": 4, 01:26:33.469 "num_base_bdevs_operational": 4, 01:26:33.469 "base_bdevs_list": [ 01:26:33.469 { 
01:26:33.469 "name": "BaseBdev1", 01:26:33.469 "uuid": "f8491e7b-87ea-5782-aef3-ecb45ca0498e", 01:26:33.469 "is_configured": true, 01:26:33.469 "data_offset": 2048, 01:26:33.469 "data_size": 63488 01:26:33.469 }, 01:26:33.469 { 01:26:33.469 "name": "BaseBdev2", 01:26:33.469 "uuid": "d4254aa3-2eae-5b04-89f2-c7c9ca3ba5a0", 01:26:33.469 "is_configured": true, 01:26:33.469 "data_offset": 2048, 01:26:33.469 "data_size": 63488 01:26:33.469 }, 01:26:33.469 { 01:26:33.469 "name": "BaseBdev3", 01:26:33.469 "uuid": "7386294c-b4f9-564e-8b85-5c5479cae125", 01:26:33.469 "is_configured": true, 01:26:33.469 "data_offset": 2048, 01:26:33.469 "data_size": 63488 01:26:33.469 }, 01:26:33.469 { 01:26:33.469 "name": "BaseBdev4", 01:26:33.469 "uuid": "b1eefccf-779f-54bb-87e8-687688a5e44b", 01:26:33.469 "is_configured": true, 01:26:33.469 "data_offset": 2048, 01:26:33.469 "data_size": 63488 01:26:33.469 } 01:26:33.469 ] 01:26:33.469 }' 01:26:33.469 05:21:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:26:33.469 05:21:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 01:26:33.727 05:21:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 01:26:33.727 05:21:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 01:26:33.985 [2024-12-09 05:21:25.465494] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 01:26:34.918 05:21:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 01:26:34.918 05:21:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:34.918 05:21:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 01:26:34.918 05:21:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:34.918 05:21:26 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 01:26:34.918 05:21:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 01:26:34.918 05:21:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 01:26:34.918 05:21:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 01:26:34.918 05:21:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 01:26:34.918 05:21:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:26:34.918 05:21:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:26:34.918 05:21:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:26:34.918 05:21:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:26:34.918 05:21:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 01:26:34.918 05:21:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:26:34.918 05:21:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:26:34.918 05:21:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:26:34.918 05:21:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:26:34.918 05:21:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:26:34.918 05:21:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:26:34.918 05:21:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:34.918 05:21:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 01:26:34.918 05:21:26 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:34.918 05:21:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:26:34.918 "name": "raid_bdev1", 01:26:34.918 "uuid": "ce67a753-c524-4b61-b78d-7a3bf59704c3", 01:26:34.918 "strip_size_kb": 0, 01:26:34.918 "state": "online", 01:26:34.918 "raid_level": "raid1", 01:26:34.918 "superblock": true, 01:26:34.918 "num_base_bdevs": 4, 01:26:34.918 "num_base_bdevs_discovered": 4, 01:26:34.918 "num_base_bdevs_operational": 4, 01:26:34.918 "base_bdevs_list": [ 01:26:34.918 { 01:26:34.918 "name": "BaseBdev1", 01:26:34.918 "uuid": "f8491e7b-87ea-5782-aef3-ecb45ca0498e", 01:26:34.918 "is_configured": true, 01:26:34.918 "data_offset": 2048, 01:26:34.918 "data_size": 63488 01:26:34.918 }, 01:26:34.918 { 01:26:34.918 "name": "BaseBdev2", 01:26:34.918 "uuid": "d4254aa3-2eae-5b04-89f2-c7c9ca3ba5a0", 01:26:34.918 "is_configured": true, 01:26:34.918 "data_offset": 2048, 01:26:34.918 "data_size": 63488 01:26:34.918 }, 01:26:34.918 { 01:26:34.918 "name": "BaseBdev3", 01:26:34.918 "uuid": "7386294c-b4f9-564e-8b85-5c5479cae125", 01:26:34.918 "is_configured": true, 01:26:34.918 "data_offset": 2048, 01:26:34.918 "data_size": 63488 01:26:34.918 }, 01:26:34.918 { 01:26:34.918 "name": "BaseBdev4", 01:26:34.918 "uuid": "b1eefccf-779f-54bb-87e8-687688a5e44b", 01:26:34.918 "is_configured": true, 01:26:34.918 "data_offset": 2048, 01:26:34.918 "data_size": 63488 01:26:34.918 } 01:26:34.918 ] 01:26:34.918 }' 01:26:34.918 05:21:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:26:34.918 05:21:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 01:26:35.483 05:21:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 01:26:35.483 05:21:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:35.483 05:21:26 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 01:26:35.483 [2024-12-09 05:21:26.888555] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 01:26:35.483 [2024-12-09 05:21:26.888741] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 01:26:35.483 [2024-12-09 05:21:26.892915] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 01:26:35.483 [2024-12-09 05:21:26.893083] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:26:35.483 [2024-12-09 05:21:26.893237] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 01:26:35.483 [2024-12-09 05:21:26.893273] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 01:26:35.483 { 01:26:35.483 "results": [ 01:26:35.483 { 01:26:35.483 "job": "raid_bdev1", 01:26:35.483 "core_mask": "0x1", 01:26:35.483 "workload": "randrw", 01:26:35.483 "percentage": 50, 01:26:35.483 "status": "finished", 01:26:35.483 "queue_depth": 1, 01:26:35.483 "io_size": 131072, 01:26:35.484 "runtime": 1.420848, 01:26:35.484 "iops": 7090.1320901320905, 01:26:35.484 "mibps": 886.2665112665113, 01:26:35.484 "io_failed": 0, 01:26:35.484 "io_timeout": 0, 01:26:35.484 "avg_latency_us": 136.7249741007454, 01:26:35.484 "min_latency_us": 37.93454545454546, 01:26:35.484 "max_latency_us": 2025.658181818182 01:26:35.484 } 01:26:35.484 ], 01:26:35.484 "core_count": 1 01:26:35.484 } 01:26:35.484 05:21:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:35.484 05:21:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75122 01:26:35.484 05:21:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 75122 ']' 01:26:35.484 05:21:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 75122 01:26:35.484 05:21:26 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@959 -- # uname 01:26:35.484 05:21:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:26:35.484 05:21:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75122 01:26:35.484 killing process with pid 75122 01:26:35.484 05:21:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:26:35.484 05:21:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:26:35.484 05:21:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75122' 01:26:35.484 05:21:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 75122 01:26:35.484 [2024-12-09 05:21:26.932657] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 01:26:35.484 05:21:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 75122 01:26:35.741 [2024-12-09 05:21:27.199953] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 01:26:37.127 05:21:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.BdHuTZy0f2 01:26:37.127 05:21:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 01:26:37.127 05:21:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 01:26:37.127 05:21:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 01:26:37.127 05:21:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 01:26:37.127 05:21:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 01:26:37.127 05:21:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 01:26:37.127 05:21:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 01:26:37.127 01:26:37.127 real 0m4.848s 01:26:37.127 user 0m5.942s 01:26:37.127 sys 0m0.615s 
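(Aside, not part of the log: the fail-rate check above, bdev_raid.sh@845-847, pipes the bdevperf log through `grep -v Job | grep raid_bdev1 | awk '{print $6}'` to pull the failures-per-second column and compares it against 0.00. A minimal standalone sketch of that pipeline follows, using a hypothetical one-line results file; the real bdevperf column layout may differ, and field 6 here is chosen only to match the `awk '{print $6}'` seen in the log.)

```shell
# Hypothetical bdevperf summary file; only its shape matters for the sketch.
# Field 6 of the results line carries the fail-per-second figure here.
log=$(mktemp)
printf 'Job: raid_bdev1 (core 0)\n' >> "$log"                       # dropped by grep -v Job
printf 'raid_bdev1 randrw 7090.13 886.27 0 0.00 136.72\n' >> "$log" # kept by grep raid_bdev1
fail_per_s=$(grep -v Job "$log" | grep raid_bdev1 | awk '{print $6}')
rm -f "$log"
[ "$fail_per_s" = "0.00" ] && echo "no I/O failures"
```

For the read-error case shown above, raid1 redundancy absorbs the injected read failure, so the expected fail rate stays 0.00.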
01:26:37.127 ************************************ 01:26:37.127 END TEST raid_read_error_test 01:26:37.127 ************************************ 01:26:37.127 05:21:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 01:26:37.127 05:21:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 01:26:37.127 05:21:28 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 01:26:37.127 05:21:28 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 01:26:37.127 05:21:28 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 01:26:37.127 05:21:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 01:26:37.127 ************************************ 01:26:37.127 START TEST raid_write_error_test 01:26:37.127 ************************************ 01:26:37.127 05:21:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 write 01:26:37.127 05:21:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 01:26:37.127 05:21:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 01:26:37.127 05:21:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 01:26:37.127 05:21:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 01:26:37.127 05:21:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 01:26:37.127 05:21:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 01:26:37.127 05:21:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 01:26:37.127 05:21:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 01:26:37.127 05:21:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 01:26:37.127 05:21:28 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 01:26:37.127 05:21:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 01:26:37.127 05:21:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 01:26:37.127 05:21:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 01:26:37.127 05:21:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 01:26:37.127 05:21:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 01:26:37.127 05:21:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 01:26:37.127 05:21:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 01:26:37.127 05:21:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 01:26:37.127 05:21:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 01:26:37.127 05:21:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 01:26:37.127 05:21:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 01:26:37.127 05:21:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 01:26:37.127 05:21:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 01:26:37.127 05:21:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 01:26:37.127 05:21:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 01:26:37.127 05:21:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 01:26:37.127 05:21:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 01:26:37.127 05:21:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.U903VNs8X5 01:26:37.127 05:21:28 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75268 01:26:37.127 05:21:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75268 01:26:37.127 05:21:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 01:26:37.127 05:21:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 75268 ']' 01:26:37.127 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:26:37.127 05:21:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:26:37.127 05:21:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 01:26:37.127 05:21:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:26:37.127 05:21:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 01:26:37.127 05:21:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 01:26:37.127 [2024-12-09 05:21:28.526430] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
01:26:37.127 [2024-12-09 05:21:28.526887] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75268 ] 01:26:37.127 [2024-12-09 05:21:28.710398] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:26:37.398 [2024-12-09 05:21:28.842996] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:26:37.656 [2024-12-09 05:21:29.037224] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 01:26:37.656 [2024-12-09 05:21:29.037536] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 01:26:37.914 05:21:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:26:37.914 05:21:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 01:26:37.914 05:21:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 01:26:37.914 05:21:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 01:26:37.914 05:21:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:37.914 05:21:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 01:26:38.173 BaseBdev1_malloc 01:26:38.173 05:21:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:38.173 05:21:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 01:26:38.174 05:21:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:38.174 05:21:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 01:26:38.174 true 01:26:38.174 05:21:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 01:26:38.174 05:21:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 01:26:38.174 05:21:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:38.174 05:21:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 01:26:38.174 [2024-12-09 05:21:29.556809] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 01:26:38.174 [2024-12-09 05:21:29.556896] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:26:38.174 [2024-12-09 05:21:29.556924] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 01:26:38.174 [2024-12-09 05:21:29.556941] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:26:38.174 [2024-12-09 05:21:29.559677] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:26:38.174 [2024-12-09 05:21:29.559755] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 01:26:38.174 BaseBdev1 01:26:38.174 05:21:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:38.174 05:21:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 01:26:38.174 05:21:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 01:26:38.174 05:21:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:38.174 05:21:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 01:26:38.174 BaseBdev2_malloc 01:26:38.174 05:21:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:38.174 05:21:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 01:26:38.174 05:21:29 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:38.174 05:21:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 01:26:38.174 true 01:26:38.174 05:21:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:38.174 05:21:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 01:26:38.174 05:21:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:38.174 05:21:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 01:26:38.174 [2024-12-09 05:21:29.621881] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 01:26:38.174 [2024-12-09 05:21:29.621965] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:26:38.174 [2024-12-09 05:21:29.621991] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 01:26:38.174 [2024-12-09 05:21:29.622007] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:26:38.174 [2024-12-09 05:21:29.624794] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:26:38.174 [2024-12-09 05:21:29.624856] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 01:26:38.174 BaseBdev2 01:26:38.174 05:21:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:38.174 05:21:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 01:26:38.174 05:21:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 01:26:38.174 05:21:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:38.174 05:21:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
01:26:38.174 BaseBdev3_malloc 01:26:38.174 05:21:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:38.174 05:21:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 01:26:38.174 05:21:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:38.174 05:21:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 01:26:38.174 true 01:26:38.174 05:21:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:38.174 05:21:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 01:26:38.174 05:21:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:38.174 05:21:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 01:26:38.174 [2024-12-09 05:21:29.698963] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 01:26:38.174 [2024-12-09 05:21:29.699055] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:26:38.174 [2024-12-09 05:21:29.699083] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 01:26:38.174 [2024-12-09 05:21:29.699112] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:26:38.174 [2024-12-09 05:21:29.702293] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:26:38.174 [2024-12-09 05:21:29.702359] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 01:26:38.174 BaseBdev3 01:26:38.174 05:21:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:38.174 05:21:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 01:26:38.174 05:21:29 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 01:26:38.174 05:21:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:38.174 05:21:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 01:26:38.174 BaseBdev4_malloc 01:26:38.174 05:21:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:38.174 05:21:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 01:26:38.174 05:21:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:38.174 05:21:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 01:26:38.174 true 01:26:38.174 05:21:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:38.174 05:21:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 01:26:38.174 05:21:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:38.174 05:21:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 01:26:38.174 [2024-12-09 05:21:29.763208] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 01:26:38.174 [2024-12-09 05:21:29.763290] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:26:38.174 [2024-12-09 05:21:29.763316] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 01:26:38.174 [2024-12-09 05:21:29.763333] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:26:38.174 [2024-12-09 05:21:29.766208] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:26:38.174 [2024-12-09 05:21:29.766275] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 01:26:38.174 BaseBdev4 
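(Aside, not part of the log: the raid_write_error_test setup above builds a three-layer stack under each RAID leg, a malloc bdev, a bdev_error wrapper exposing EE_BaseBdevN_malloc, and a passthru bdev the raid sits on. The dry-run sketch below only prints the RPC sequence corresponding to the rpc_cmd calls in the log; the rpc.py invocation style is an assumption, and nothing is executed against a running SPDK target here.)

```shell
# Dry run: print the RPC sequence the test issues. rpc() only echoes here;
# the real test sends these via rpc_cmd to the SPDK app under test.
out=""
rpc() { out="$out $*;"; echo "rpc.py $*"; }

for i in 1 2 3 4; do
    rpc bdev_malloc_create 32 512 -b "BaseBdev${i}_malloc"             # backing store
    rpc bdev_error_create "BaseBdev${i}_malloc"                        # error-injection wrapper
    rpc bdev_passthru_create -b "EE_BaseBdev${i}_malloc" -p "BaseBdev${i}"
done
# Assemble raid1 over the four passthru bdevs, with an on-disk superblock (-s).
rpc bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s
# Later the test injects a write failure into leg 1 through the error bdev:
rpc bdev_error_inject_error EE_BaseBdev1_malloc write failure
```

On a write failure, raid1 drops the failing base bdev rather than failing the array, which is why the log below expects num_base_bdevs_operational to fall from 4 to 3 while the state stays online.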
01:26:38.174 05:21:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:38.174 05:21:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 01:26:38.174 05:21:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:38.174 05:21:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 01:26:38.174 [2024-12-09 05:21:29.771354] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 01:26:38.174 [2024-12-09 05:21:29.773913] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 01:26:38.174 [2024-12-09 05:21:29.774145] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 01:26:38.174 [2024-12-09 05:21:29.774284] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 01:26:38.174 [2024-12-09 05:21:29.774649] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 01:26:38.175 [2024-12-09 05:21:29.774725] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 01:26:38.175 [2024-12-09 05:21:29.775079] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 01:26:38.175 [2024-12-09 05:21:29.775439] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 01:26:38.175 [2024-12-09 05:21:29.775561] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 01:26:38.175 [2024-12-09 05:21:29.776019] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:26:38.175 05:21:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:38.175 05:21:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 4 01:26:38.175 05:21:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:26:38.175 05:21:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:26:38.175 05:21:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:26:38.175 05:21:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:26:38.175 05:21:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 01:26:38.175 05:21:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:26:38.175 05:21:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:26:38.175 05:21:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:26:38.175 05:21:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:26:38.175 05:21:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:26:38.175 05:21:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:38.175 05:21:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 01:26:38.175 05:21:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:26:38.433 05:21:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:38.433 05:21:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:26:38.433 "name": "raid_bdev1", 01:26:38.433 "uuid": "97af0d02-3cf3-4718-b99e-eeafaaf7be9a", 01:26:38.433 "strip_size_kb": 0, 01:26:38.433 "state": "online", 01:26:38.433 "raid_level": "raid1", 01:26:38.433 "superblock": true, 01:26:38.433 "num_base_bdevs": 4, 01:26:38.433 "num_base_bdevs_discovered": 4, 01:26:38.433 
"num_base_bdevs_operational": 4, 01:26:38.433 "base_bdevs_list": [ 01:26:38.433 { 01:26:38.433 "name": "BaseBdev1", 01:26:38.433 "uuid": "9be2d729-471b-54fb-9436-570695543baa", 01:26:38.433 "is_configured": true, 01:26:38.433 "data_offset": 2048, 01:26:38.433 "data_size": 63488 01:26:38.433 }, 01:26:38.433 { 01:26:38.433 "name": "BaseBdev2", 01:26:38.433 "uuid": "95e348a5-55f6-5619-87bf-ae4028757efd", 01:26:38.433 "is_configured": true, 01:26:38.433 "data_offset": 2048, 01:26:38.433 "data_size": 63488 01:26:38.433 }, 01:26:38.433 { 01:26:38.433 "name": "BaseBdev3", 01:26:38.433 "uuid": "c93c07dc-8d64-5204-b70b-e411de44ea84", 01:26:38.433 "is_configured": true, 01:26:38.433 "data_offset": 2048, 01:26:38.433 "data_size": 63488 01:26:38.433 }, 01:26:38.433 { 01:26:38.433 "name": "BaseBdev4", 01:26:38.433 "uuid": "63aa2829-e58d-5370-a110-f29f53ee5e1f", 01:26:38.433 "is_configured": true, 01:26:38.433 "data_offset": 2048, 01:26:38.433 "data_size": 63488 01:26:38.433 } 01:26:38.433 ] 01:26:38.433 }' 01:26:38.433 05:21:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:26:38.433 05:21:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 01:26:38.695 05:21:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 01:26:38.695 05:21:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 01:26:38.952 [2024-12-09 05:21:30.441602] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 01:26:39.886 05:21:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 01:26:39.886 05:21:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:39.886 05:21:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 01:26:39.886 [2024-12-09 05:21:31.318037] 
bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 01:26:39.886 [2024-12-09 05:21:31.318116] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 01:26:39.886 [2024-12-09 05:21:31.318455] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006a40 01:26:39.886 05:21:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:39.886 05:21:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 01:26:39.886 05:21:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 01:26:39.886 05:21:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 01:26:39.886 05:21:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 01:26:39.886 05:21:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 01:26:39.886 05:21:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:26:39.886 05:21:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:26:39.886 05:21:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:26:39.886 05:21:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:26:39.886 05:21:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 01:26:39.886 05:21:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:26:39.886 05:21:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:26:39.886 05:21:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:26:39.886 05:21:31 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 01:26:39.886 05:21:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:26:39.886 05:21:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:26:39.886 05:21:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:39.886 05:21:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 01:26:39.886 05:21:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:39.886 05:21:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:26:39.886 "name": "raid_bdev1", 01:26:39.886 "uuid": "97af0d02-3cf3-4718-b99e-eeafaaf7be9a", 01:26:39.886 "strip_size_kb": 0, 01:26:39.886 "state": "online", 01:26:39.886 "raid_level": "raid1", 01:26:39.886 "superblock": true, 01:26:39.886 "num_base_bdevs": 4, 01:26:39.886 "num_base_bdevs_discovered": 3, 01:26:39.886 "num_base_bdevs_operational": 3, 01:26:39.886 "base_bdevs_list": [ 01:26:39.886 { 01:26:39.886 "name": null, 01:26:39.886 "uuid": "00000000-0000-0000-0000-000000000000", 01:26:39.886 "is_configured": false, 01:26:39.886 "data_offset": 0, 01:26:39.886 "data_size": 63488 01:26:39.886 }, 01:26:39.886 { 01:26:39.886 "name": "BaseBdev2", 01:26:39.886 "uuid": "95e348a5-55f6-5619-87bf-ae4028757efd", 01:26:39.886 "is_configured": true, 01:26:39.886 "data_offset": 2048, 01:26:39.886 "data_size": 63488 01:26:39.886 }, 01:26:39.886 { 01:26:39.886 "name": "BaseBdev3", 01:26:39.886 "uuid": "c93c07dc-8d64-5204-b70b-e411de44ea84", 01:26:39.886 "is_configured": true, 01:26:39.886 "data_offset": 2048, 01:26:39.886 "data_size": 63488 01:26:39.886 }, 01:26:39.886 { 01:26:39.886 "name": "BaseBdev4", 01:26:39.886 "uuid": "63aa2829-e58d-5370-a110-f29f53ee5e1f", 01:26:39.886 "is_configured": true, 01:26:39.886 "data_offset": 2048, 01:26:39.886 "data_size": 63488 01:26:39.886 } 01:26:39.886 ] 
01:26:39.886 }' 01:26:39.886 05:21:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:26:39.886 05:21:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 01:26:40.452 05:21:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 01:26:40.452 05:21:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:40.452 05:21:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 01:26:40.452 [2024-12-09 05:21:31.834809] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 01:26:40.452 [2024-12-09 05:21:31.834845] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 01:26:40.452 [2024-12-09 05:21:31.838446] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 01:26:40.452 [2024-12-09 05:21:31.838514] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:26:40.452 [2024-12-09 05:21:31.838654] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 01:26:40.452 [2024-12-09 05:21:31.838675] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 01:26:40.452 { 01:26:40.452 "results": [ 01:26:40.452 { 01:26:40.452 "job": "raid_bdev1", 01:26:40.452 "core_mask": "0x1", 01:26:40.452 "workload": "randrw", 01:26:40.452 "percentage": 50, 01:26:40.452 "status": "finished", 01:26:40.452 "queue_depth": 1, 01:26:40.452 "io_size": 131072, 01:26:40.452 "runtime": 1.390981, 01:26:40.452 "iops": 7759.272053320642, 01:26:40.452 "mibps": 969.9090066650803, 01:26:40.452 "io_failed": 0, 01:26:40.452 "io_timeout": 0, 01:26:40.452 "avg_latency_us": 124.51914995409481, 01:26:40.452 "min_latency_us": 40.261818181818185, 01:26:40.452 "max_latency_us": 2040.5527272727272 01:26:40.452 } 01:26:40.452 ], 01:26:40.452 "core_count": 1 
01:26:40.452 } 01:26:40.452 05:21:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:40.452 05:21:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75268 01:26:40.452 05:21:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 75268 ']' 01:26:40.453 05:21:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 75268 01:26:40.453 05:21:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 01:26:40.453 05:21:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:26:40.453 05:21:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75268 01:26:40.453 killing process with pid 75268 01:26:40.453 05:21:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:26:40.453 05:21:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:26:40.453 05:21:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75268' 01:26:40.453 05:21:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 75268 01:26:40.453 [2024-12-09 05:21:31.870728] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 01:26:40.453 05:21:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 75268 01:26:40.711 [2024-12-09 05:21:32.135200] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 01:26:41.646 05:21:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.U903VNs8X5 01:26:41.646 05:21:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 01:26:41.646 05:21:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 01:26:41.904 05:21:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # 
fail_per_s=0.00 01:26:41.904 05:21:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 01:26:41.904 05:21:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 01:26:41.904 05:21:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 01:26:41.904 05:21:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 01:26:41.904 01:26:41.904 real 0m4.873s 01:26:41.904 user 0m6.003s 01:26:41.904 sys 0m0.605s 01:26:41.904 05:21:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 01:26:41.904 ************************************ 01:26:41.904 END TEST raid_write_error_test 01:26:41.904 ************************************ 01:26:41.904 05:21:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 01:26:41.904 05:21:33 bdev_raid -- bdev/bdev_raid.sh@976 -- # '[' true = true ']' 01:26:41.904 05:21:33 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 01:26:41.904 05:21:33 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 01:26:41.904 05:21:33 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 01:26:41.904 05:21:33 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 01:26:41.904 05:21:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 01:26:41.904 ************************************ 01:26:41.904 START TEST raid_rebuild_test 01:26:41.904 ************************************ 01:26:41.904 05:21:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false false true 01:26:41.904 05:21:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 01:26:41.904 05:21:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 01:26:41.904 05:21:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 01:26:41.904 
05:21:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 01:26:41.904 05:21:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 01:26:41.904 05:21:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 01:26:41.904 05:21:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 01:26:41.904 05:21:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 01:26:41.904 05:21:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 01:26:41.904 05:21:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 01:26:41.904 05:21:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 01:26:41.904 05:21:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 01:26:41.904 05:21:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 01:26:41.904 05:21:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 01:26:41.904 05:21:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 01:26:41.904 05:21:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 01:26:41.904 05:21:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 01:26:41.904 05:21:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 01:26:41.904 05:21:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 01:26:41.904 05:21:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 01:26:41.904 05:21:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 01:26:41.904 05:21:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 01:26:41.904 05:21:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 
01:26:41.904 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:26:41.904 05:21:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=75412 01:26:41.904 05:21:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 75412 01:26:41.904 05:21:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 75412 ']' 01:26:41.904 05:21:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:26:41.904 05:21:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 01:26:41.904 05:21:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 01:26:41.904 05:21:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:26:41.904 05:21:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 01:26:41.904 05:21:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 01:26:41.904 [2024-12-09 05:21:33.421905] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:26:41.904 [2024-12-09 05:21:33.422436] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75412 ] 01:26:41.904 I/O size of 3145728 is greater than zero copy threshold (65536). 01:26:41.904 Zero copy mechanism will not be used. 
01:26:42.162 [2024-12-09 05:21:33.596866] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:26:42.162 [2024-12-09 05:21:33.721773] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:26:42.419 [2024-12-09 05:21:33.916741] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 01:26:42.419 [2024-12-09 05:21:33.917022] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 01:26:42.986 05:21:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:26:42.986 05:21:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 01:26:42.986 05:21:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 01:26:42.986 05:21:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 01:26:42.986 05:21:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:42.986 05:21:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 01:26:42.986 BaseBdev1_malloc 01:26:42.986 05:21:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:42.986 05:21:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 01:26:42.986 05:21:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:42.986 05:21:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 01:26:42.986 [2024-12-09 05:21:34.435067] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 01:26:42.986 [2024-12-09 05:21:34.435156] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:26:42.986 [2024-12-09 05:21:34.435186] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 01:26:42.986 [2024-12-09 05:21:34.435203] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:26:42.986 [2024-12-09 05:21:34.438147] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:26:42.986 [2024-12-09 05:21:34.438212] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 01:26:42.986 BaseBdev1 01:26:42.986 05:21:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:42.986 05:21:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 01:26:42.986 05:21:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 01:26:42.986 05:21:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:42.986 05:21:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 01:26:42.986 BaseBdev2_malloc 01:26:42.986 05:21:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:42.986 05:21:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 01:26:42.986 05:21:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:42.986 05:21:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 01:26:42.986 [2024-12-09 05:21:34.485696] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 01:26:42.986 [2024-12-09 05:21:34.486026] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:26:42.986 [2024-12-09 05:21:34.486071] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 01:26:42.986 [2024-12-09 05:21:34.486090] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:26:42.986 [2024-12-09 05:21:34.489135] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:26:42.986 [2024-12-09 05:21:34.489360] 
vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 01:26:42.986 BaseBdev2 01:26:42.986 05:21:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:42.986 05:21:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 01:26:42.986 05:21:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:42.986 05:21:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 01:26:42.986 spare_malloc 01:26:42.986 05:21:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:42.986 05:21:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 01:26:42.986 05:21:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:42.986 05:21:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 01:26:42.986 spare_delay 01:26:42.986 05:21:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:42.986 05:21:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 01:26:42.986 05:21:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:42.986 05:21:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 01:26:42.986 [2024-12-09 05:21:34.556318] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 01:26:42.986 [2024-12-09 05:21:34.556447] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:26:42.986 [2024-12-09 05:21:34.556495] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 01:26:42.986 [2024-12-09 05:21:34.556515] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:26:42.986 [2024-12-09 
05:21:34.559533] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:26:42.986 [2024-12-09 05:21:34.559584] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 01:26:42.986 spare 01:26:42.986 05:21:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:42.986 05:21:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 01:26:42.986 05:21:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:42.986 05:21:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 01:26:42.986 [2024-12-09 05:21:34.568543] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 01:26:42.986 [2024-12-09 05:21:34.571105] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 01:26:42.986 [2024-12-09 05:21:34.571217] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 01:26:42.986 [2024-12-09 05:21:34.571237] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 01:26:42.986 [2024-12-09 05:21:34.571608] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 01:26:42.986 [2024-12-09 05:21:34.571827] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 01:26:42.986 [2024-12-09 05:21:34.571980] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 01:26:42.986 [2024-12-09 05:21:34.572192] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:26:42.986 05:21:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:42.986 05:21:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 01:26:42.986 05:21:34 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:26:42.986 05:21:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:26:42.986 05:21:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:26:42.986 05:21:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:26:42.986 05:21:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 01:26:42.986 05:21:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:26:42.986 05:21:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:26:42.986 05:21:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:26:42.986 05:21:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:26:42.986 05:21:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:26:42.986 05:21:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:42.986 05:21:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 01:26:42.986 05:21:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:26:42.986 05:21:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:43.245 05:21:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:26:43.245 "name": "raid_bdev1", 01:26:43.245 "uuid": "4133e268-c8c0-4b40-b97e-c6bd3e994641", 01:26:43.245 "strip_size_kb": 0, 01:26:43.245 "state": "online", 01:26:43.245 "raid_level": "raid1", 01:26:43.245 "superblock": false, 01:26:43.245 "num_base_bdevs": 2, 01:26:43.245 "num_base_bdevs_discovered": 2, 01:26:43.245 "num_base_bdevs_operational": 2, 01:26:43.245 "base_bdevs_list": [ 01:26:43.245 { 01:26:43.245 "name": "BaseBdev1", 
01:26:43.245 "uuid": "69a791e7-124c-5f68-93ec-994ee1a10795", 01:26:43.245 "is_configured": true, 01:26:43.245 "data_offset": 0, 01:26:43.245 "data_size": 65536 01:26:43.245 }, 01:26:43.245 { 01:26:43.245 "name": "BaseBdev2", 01:26:43.245 "uuid": "6abe8586-e453-5c90-92b6-c89f4d2e9102", 01:26:43.245 "is_configured": true, 01:26:43.245 "data_offset": 0, 01:26:43.245 "data_size": 65536 01:26:43.245 } 01:26:43.245 ] 01:26:43.245 }' 01:26:43.245 05:21:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:26:43.245 05:21:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 01:26:43.503 05:21:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 01:26:43.503 05:21:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:43.503 05:21:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 01:26:43.503 05:21:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 01:26:43.503 [2024-12-09 05:21:35.089062] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 01:26:43.503 05:21:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:43.761 05:21:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 01:26:43.761 05:21:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 01:26:43.761 05:21:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:43.761 05:21:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 01:26:43.761 05:21:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 01:26:43.761 05:21:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:43.761 05:21:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 01:26:43.761 
05:21:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 01:26:43.761 05:21:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 01:26:43.761 05:21:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 01:26:43.761 05:21:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 01:26:43.761 05:21:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 01:26:43.761 05:21:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 01:26:43.761 05:21:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 01:26:43.761 05:21:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 01:26:43.761 05:21:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 01:26:43.761 05:21:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 01:26:43.761 05:21:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 01:26:43.761 05:21:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 01:26:43.761 05:21:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 01:26:44.054 [2024-12-09 05:21:35.464906] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 01:26:44.054 /dev/nbd0 01:26:44.054 05:21:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 01:26:44.054 05:21:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 01:26:44.054 05:21:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 01:26:44.054 05:21:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 01:26:44.054 05:21:35 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 01:26:44.054 05:21:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 01:26:44.054 05:21:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 01:26:44.054 05:21:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 01:26:44.054 05:21:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 01:26:44.054 05:21:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 01:26:44.054 05:21:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 01:26:44.054 1+0 records in 01:26:44.054 1+0 records out 01:26:44.054 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000427154 s, 9.6 MB/s 01:26:44.054 05:21:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:26:44.054 05:21:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 01:26:44.054 05:21:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:26:44.054 05:21:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 01:26:44.054 05:21:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 01:26:44.054 05:21:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 01:26:44.054 05:21:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 01:26:44.054 05:21:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 01:26:44.054 05:21:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 01:26:44.054 05:21:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 
01:26:50.631 65536+0 records in 01:26:50.631 65536+0 records out 01:26:50.631 33554432 bytes (34 MB, 32 MiB) copied, 6.0079 s, 5.6 MB/s 01:26:50.631 05:21:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 01:26:50.631 05:21:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 01:26:50.631 05:21:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 01:26:50.631 05:21:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 01:26:50.631 05:21:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 01:26:50.631 05:21:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 01:26:50.631 05:21:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 01:26:50.631 [2024-12-09 05:21:41.816743] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:26:50.631 05:21:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 01:26:50.631 05:21:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 01:26:50.631 05:21:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 01:26:50.631 05:21:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:26:50.631 05:21:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:26:50.631 05:21:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 01:26:50.631 05:21:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 01:26:50.631 05:21:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 01:26:50.631 05:21:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 01:26:50.631 05:21:41 bdev_raid.raid_rebuild_test 
-- common/autotest_common.sh@563 -- # xtrace_disable 01:26:50.631 05:21:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 01:26:50.631 [2024-12-09 05:21:41.849111] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 01:26:50.631 05:21:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:50.631 05:21:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 01:26:50.631 05:21:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:26:50.631 05:21:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:26:50.631 05:21:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:26:50.631 05:21:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:26:50.631 05:21:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 01:26:50.631 05:21:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:26:50.631 05:21:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:26:50.631 05:21:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:26:50.631 05:21:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:26:50.631 05:21:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:26:50.631 05:21:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:50.631 05:21:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:26:50.631 05:21:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 01:26:50.632 05:21:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:50.632 05:21:41 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:26:50.632 "name": "raid_bdev1", 01:26:50.632 "uuid": "4133e268-c8c0-4b40-b97e-c6bd3e994641", 01:26:50.632 "strip_size_kb": 0, 01:26:50.632 "state": "online", 01:26:50.632 "raid_level": "raid1", 01:26:50.632 "superblock": false, 01:26:50.632 "num_base_bdevs": 2, 01:26:50.632 "num_base_bdevs_discovered": 1, 01:26:50.632 "num_base_bdevs_operational": 1, 01:26:50.632 "base_bdevs_list": [ 01:26:50.632 { 01:26:50.632 "name": null, 01:26:50.632 "uuid": "00000000-0000-0000-0000-000000000000", 01:26:50.632 "is_configured": false, 01:26:50.632 "data_offset": 0, 01:26:50.632 "data_size": 65536 01:26:50.632 }, 01:26:50.632 { 01:26:50.632 "name": "BaseBdev2", 01:26:50.632 "uuid": "6abe8586-e453-5c90-92b6-c89f4d2e9102", 01:26:50.632 "is_configured": true, 01:26:50.632 "data_offset": 0, 01:26:50.632 "data_size": 65536 01:26:50.632 } 01:26:50.632 ] 01:26:50.632 }' 01:26:50.632 05:21:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:26:50.632 05:21:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 01:26:50.890 05:21:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 01:26:50.890 05:21:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:50.890 05:21:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 01:26:50.890 [2024-12-09 05:21:42.365338] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 01:26:50.890 [2024-12-09 05:21:42.380608] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09bd0 01:26:50.890 05:21:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:50.890 05:21:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 01:26:50.890 [2024-12-09 05:21:42.383131] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started 
rebuild on raid bdev raid_bdev1 01:26:51.826 05:21:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 01:26:51.826 05:21:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:26:51.826 05:21:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 01:26:51.826 05:21:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 01:26:51.826 05:21:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:26:51.826 05:21:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:26:51.826 05:21:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:26:51.826 05:21:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:51.826 05:21:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 01:26:51.826 05:21:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:51.826 05:21:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:26:51.826 "name": "raid_bdev1", 01:26:51.826 "uuid": "4133e268-c8c0-4b40-b97e-c6bd3e994641", 01:26:51.826 "strip_size_kb": 0, 01:26:51.826 "state": "online", 01:26:51.826 "raid_level": "raid1", 01:26:51.826 "superblock": false, 01:26:51.826 "num_base_bdevs": 2, 01:26:51.826 "num_base_bdevs_discovered": 2, 01:26:51.826 "num_base_bdevs_operational": 2, 01:26:51.826 "process": { 01:26:51.826 "type": "rebuild", 01:26:51.826 "target": "spare", 01:26:51.826 "progress": { 01:26:51.826 "blocks": 20480, 01:26:51.826 "percent": 31 01:26:51.826 } 01:26:51.826 }, 01:26:51.826 "base_bdevs_list": [ 01:26:51.826 { 01:26:51.826 "name": "spare", 01:26:51.826 "uuid": "d48f6f6d-0546-5eee-bd54-c4b4bb471aca", 01:26:51.826 "is_configured": true, 01:26:51.826 "data_offset": 0, 01:26:51.826 
"data_size": 65536 01:26:51.826 }, 01:26:51.826 { 01:26:51.826 "name": "BaseBdev2", 01:26:51.826 "uuid": "6abe8586-e453-5c90-92b6-c89f4d2e9102", 01:26:51.826 "is_configured": true, 01:26:51.826 "data_offset": 0, 01:26:51.826 "data_size": 65536 01:26:51.826 } 01:26:51.826 ] 01:26:51.826 }' 01:26:51.826 05:21:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:26:52.086 05:21:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 01:26:52.086 05:21:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:26:52.086 05:21:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 01:26:52.086 05:21:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 01:26:52.086 05:21:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:52.086 05:21:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 01:26:52.086 [2024-12-09 05:21:43.548956] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 01:26:52.086 [2024-12-09 05:21:43.593804] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 01:26:52.086 [2024-12-09 05:21:43.593970] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:26:52.086 [2024-12-09 05:21:43.593995] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 01:26:52.086 [2024-12-09 05:21:43.594010] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 01:26:52.086 05:21:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:52.086 05:21:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 01:26:52.086 05:21:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 01:26:52.086 05:21:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:26:52.086 05:21:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:26:52.086 05:21:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:26:52.086 05:21:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 01:26:52.086 05:21:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:26:52.086 05:21:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:26:52.086 05:21:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:26:52.086 05:21:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:26:52.086 05:21:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:26:52.086 05:21:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:26:52.086 05:21:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:52.086 05:21:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 01:26:52.086 05:21:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:52.086 05:21:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:26:52.086 "name": "raid_bdev1", 01:26:52.086 "uuid": "4133e268-c8c0-4b40-b97e-c6bd3e994641", 01:26:52.086 "strip_size_kb": 0, 01:26:52.086 "state": "online", 01:26:52.086 "raid_level": "raid1", 01:26:52.086 "superblock": false, 01:26:52.086 "num_base_bdevs": 2, 01:26:52.086 "num_base_bdevs_discovered": 1, 01:26:52.086 "num_base_bdevs_operational": 1, 01:26:52.086 "base_bdevs_list": [ 01:26:52.086 { 01:26:52.086 "name": null, 01:26:52.086 "uuid": "00000000-0000-0000-0000-000000000000", 01:26:52.086 
"is_configured": false, 01:26:52.086 "data_offset": 0, 01:26:52.086 "data_size": 65536 01:26:52.086 }, 01:26:52.086 { 01:26:52.086 "name": "BaseBdev2", 01:26:52.086 "uuid": "6abe8586-e453-5c90-92b6-c89f4d2e9102", 01:26:52.086 "is_configured": true, 01:26:52.086 "data_offset": 0, 01:26:52.086 "data_size": 65536 01:26:52.086 } 01:26:52.086 ] 01:26:52.086 }' 01:26:52.086 05:21:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:26:52.086 05:21:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 01:26:52.651 05:21:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 01:26:52.651 05:21:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:26:52.651 05:21:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 01:26:52.651 05:21:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 01:26:52.651 05:21:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:26:52.651 05:21:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:26:52.651 05:21:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:26:52.651 05:21:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:52.651 05:21:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 01:26:52.651 05:21:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:52.651 05:21:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:26:52.651 "name": "raid_bdev1", 01:26:52.651 "uuid": "4133e268-c8c0-4b40-b97e-c6bd3e994641", 01:26:52.651 "strip_size_kb": 0, 01:26:52.651 "state": "online", 01:26:52.651 "raid_level": "raid1", 01:26:52.651 "superblock": false, 01:26:52.651 "num_base_bdevs": 2, 01:26:52.651 
"num_base_bdevs_discovered": 1, 01:26:52.651 "num_base_bdevs_operational": 1, 01:26:52.651 "base_bdevs_list": [ 01:26:52.652 { 01:26:52.652 "name": null, 01:26:52.652 "uuid": "00000000-0000-0000-0000-000000000000", 01:26:52.652 "is_configured": false, 01:26:52.652 "data_offset": 0, 01:26:52.652 "data_size": 65536 01:26:52.652 }, 01:26:52.652 { 01:26:52.652 "name": "BaseBdev2", 01:26:52.652 "uuid": "6abe8586-e453-5c90-92b6-c89f4d2e9102", 01:26:52.652 "is_configured": true, 01:26:52.652 "data_offset": 0, 01:26:52.652 "data_size": 65536 01:26:52.652 } 01:26:52.652 ] 01:26:52.652 }' 01:26:52.652 05:21:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:26:52.652 05:21:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 01:26:52.652 05:21:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:26:52.909 05:21:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 01:26:52.909 05:21:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 01:26:52.909 05:21:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:52.909 05:21:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 01:26:52.909 [2024-12-09 05:21:44.318158] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 01:26:52.909 [2024-12-09 05:21:44.333310] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09ca0 01:26:52.909 05:21:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:52.909 05:21:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 01:26:52.909 [2024-12-09 05:21:44.336220] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 01:26:53.871 05:21:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 01:26:53.871 05:21:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:26:53.871 05:21:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 01:26:53.871 05:21:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 01:26:53.871 05:21:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:26:53.871 05:21:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:26:53.871 05:21:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:26:53.871 05:21:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:53.871 05:21:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 01:26:53.871 05:21:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:53.871 05:21:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:26:53.871 "name": "raid_bdev1", 01:26:53.871 "uuid": "4133e268-c8c0-4b40-b97e-c6bd3e994641", 01:26:53.871 "strip_size_kb": 0, 01:26:53.871 "state": "online", 01:26:53.871 "raid_level": "raid1", 01:26:53.871 "superblock": false, 01:26:53.871 "num_base_bdevs": 2, 01:26:53.871 "num_base_bdevs_discovered": 2, 01:26:53.871 "num_base_bdevs_operational": 2, 01:26:53.871 "process": { 01:26:53.871 "type": "rebuild", 01:26:53.871 "target": "spare", 01:26:53.871 "progress": { 01:26:53.871 "blocks": 18432, 01:26:53.871 "percent": 28 01:26:53.871 } 01:26:53.871 }, 01:26:53.871 "base_bdevs_list": [ 01:26:53.871 { 01:26:53.871 "name": "spare", 01:26:53.871 "uuid": "d48f6f6d-0546-5eee-bd54-c4b4bb471aca", 01:26:53.871 "is_configured": true, 01:26:53.871 "data_offset": 0, 01:26:53.871 "data_size": 65536 01:26:53.871 }, 01:26:53.871 { 01:26:53.871 "name": "BaseBdev2", 01:26:53.871 "uuid": 
"6abe8586-e453-5c90-92b6-c89f4d2e9102", 01:26:53.871 "is_configured": true, 01:26:53.871 "data_offset": 0, 01:26:53.871 "data_size": 65536 01:26:53.871 } 01:26:53.871 ] 01:26:53.871 }' 01:26:53.871 05:21:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:26:53.871 05:21:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 01:26:53.871 05:21:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:26:54.130 05:21:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 01:26:54.130 05:21:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 01:26:54.130 05:21:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 01:26:54.130 05:21:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 01:26:54.130 05:21:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 01:26:54.130 05:21:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=407 01:26:54.130 05:21:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 01:26:54.130 05:21:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 01:26:54.130 05:21:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:26:54.130 05:21:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 01:26:54.130 05:21:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 01:26:54.130 05:21:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:26:54.130 05:21:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:26:54.130 05:21:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 01:26:54.130 05:21:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:54.130 05:21:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 01:26:54.130 05:21:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:54.130 05:21:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:26:54.130 "name": "raid_bdev1", 01:26:54.130 "uuid": "4133e268-c8c0-4b40-b97e-c6bd3e994641", 01:26:54.130 "strip_size_kb": 0, 01:26:54.130 "state": "online", 01:26:54.130 "raid_level": "raid1", 01:26:54.130 "superblock": false, 01:26:54.130 "num_base_bdevs": 2, 01:26:54.130 "num_base_bdevs_discovered": 2, 01:26:54.130 "num_base_bdevs_operational": 2, 01:26:54.130 "process": { 01:26:54.130 "type": "rebuild", 01:26:54.130 "target": "spare", 01:26:54.130 "progress": { 01:26:54.130 "blocks": 22528, 01:26:54.130 "percent": 34 01:26:54.130 } 01:26:54.130 }, 01:26:54.130 "base_bdevs_list": [ 01:26:54.130 { 01:26:54.130 "name": "spare", 01:26:54.130 "uuid": "d48f6f6d-0546-5eee-bd54-c4b4bb471aca", 01:26:54.130 "is_configured": true, 01:26:54.130 "data_offset": 0, 01:26:54.130 "data_size": 65536 01:26:54.130 }, 01:26:54.130 { 01:26:54.130 "name": "BaseBdev2", 01:26:54.130 "uuid": "6abe8586-e453-5c90-92b6-c89f4d2e9102", 01:26:54.130 "is_configured": true, 01:26:54.130 "data_offset": 0, 01:26:54.130 "data_size": 65536 01:26:54.130 } 01:26:54.130 ] 01:26:54.130 }' 01:26:54.130 05:21:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:26:54.130 05:21:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 01:26:54.130 05:21:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:26:54.130 05:21:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 01:26:54.130 05:21:45 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@711 -- # sleep 1 01:26:55.064 05:21:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 01:26:55.064 05:21:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 01:26:55.064 05:21:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:26:55.064 05:21:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 01:26:55.064 05:21:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 01:26:55.064 05:21:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:26:55.064 05:21:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:26:55.064 05:21:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:26:55.064 05:21:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:55.064 05:21:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 01:26:55.064 05:21:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:55.321 05:21:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:26:55.321 "name": "raid_bdev1", 01:26:55.321 "uuid": "4133e268-c8c0-4b40-b97e-c6bd3e994641", 01:26:55.321 "strip_size_kb": 0, 01:26:55.321 "state": "online", 01:26:55.321 "raid_level": "raid1", 01:26:55.321 "superblock": false, 01:26:55.321 "num_base_bdevs": 2, 01:26:55.321 "num_base_bdevs_discovered": 2, 01:26:55.321 "num_base_bdevs_operational": 2, 01:26:55.321 "process": { 01:26:55.321 "type": "rebuild", 01:26:55.321 "target": "spare", 01:26:55.321 "progress": { 01:26:55.321 "blocks": 45056, 01:26:55.321 "percent": 68 01:26:55.321 } 01:26:55.321 }, 01:26:55.321 "base_bdevs_list": [ 01:26:55.321 { 01:26:55.321 "name": "spare", 01:26:55.321 "uuid": 
"d48f6f6d-0546-5eee-bd54-c4b4bb471aca", 01:26:55.321 "is_configured": true, 01:26:55.321 "data_offset": 0, 01:26:55.321 "data_size": 65536 01:26:55.321 }, 01:26:55.321 { 01:26:55.321 "name": "BaseBdev2", 01:26:55.321 "uuid": "6abe8586-e453-5c90-92b6-c89f4d2e9102", 01:26:55.321 "is_configured": true, 01:26:55.321 "data_offset": 0, 01:26:55.321 "data_size": 65536 01:26:55.321 } 01:26:55.321 ] 01:26:55.321 }' 01:26:55.321 05:21:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:26:55.321 05:21:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 01:26:55.321 05:21:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:26:55.321 05:21:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 01:26:55.321 05:21:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 01:26:56.255 [2024-12-09 05:21:47.572819] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 01:26:56.255 [2024-12-09 05:21:47.572945] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 01:26:56.255 [2024-12-09 05:21:47.573041] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:26:56.255 05:21:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 01:26:56.255 05:21:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 01:26:56.255 05:21:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:26:56.255 05:21:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 01:26:56.255 05:21:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 01:26:56.255 05:21:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:26:56.255 05:21:47 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:26:56.255 05:21:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:26:56.255 05:21:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:56.255 05:21:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 01:26:56.255 05:21:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:56.255 05:21:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:26:56.255 "name": "raid_bdev1", 01:26:56.255 "uuid": "4133e268-c8c0-4b40-b97e-c6bd3e994641", 01:26:56.255 "strip_size_kb": 0, 01:26:56.255 "state": "online", 01:26:56.255 "raid_level": "raid1", 01:26:56.255 "superblock": false, 01:26:56.255 "num_base_bdevs": 2, 01:26:56.255 "num_base_bdevs_discovered": 2, 01:26:56.255 "num_base_bdevs_operational": 2, 01:26:56.255 "base_bdevs_list": [ 01:26:56.255 { 01:26:56.255 "name": "spare", 01:26:56.255 "uuid": "d48f6f6d-0546-5eee-bd54-c4b4bb471aca", 01:26:56.255 "is_configured": true, 01:26:56.255 "data_offset": 0, 01:26:56.255 "data_size": 65536 01:26:56.255 }, 01:26:56.255 { 01:26:56.255 "name": "BaseBdev2", 01:26:56.256 "uuid": "6abe8586-e453-5c90-92b6-c89f4d2e9102", 01:26:56.256 "is_configured": true, 01:26:56.256 "data_offset": 0, 01:26:56.256 "data_size": 65536 01:26:56.256 } 01:26:56.256 ] 01:26:56.256 }' 01:26:56.256 05:21:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:26:56.514 05:21:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 01:26:56.514 05:21:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:26:56.514 05:21:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 01:26:56.514 05:21:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # 
break 01:26:56.514 05:21:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 01:26:56.514 05:21:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:26:56.514 05:21:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 01:26:56.514 05:21:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 01:26:56.514 05:21:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:26:56.514 05:21:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:26:56.514 05:21:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:26:56.514 05:21:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:56.514 05:21:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 01:26:56.514 05:21:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:56.514 05:21:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:26:56.514 "name": "raid_bdev1", 01:26:56.514 "uuid": "4133e268-c8c0-4b40-b97e-c6bd3e994641", 01:26:56.514 "strip_size_kb": 0, 01:26:56.514 "state": "online", 01:26:56.514 "raid_level": "raid1", 01:26:56.514 "superblock": false, 01:26:56.514 "num_base_bdevs": 2, 01:26:56.514 "num_base_bdevs_discovered": 2, 01:26:56.514 "num_base_bdevs_operational": 2, 01:26:56.514 "base_bdevs_list": [ 01:26:56.514 { 01:26:56.514 "name": "spare", 01:26:56.514 "uuid": "d48f6f6d-0546-5eee-bd54-c4b4bb471aca", 01:26:56.514 "is_configured": true, 01:26:56.514 "data_offset": 0, 01:26:56.514 "data_size": 65536 01:26:56.514 }, 01:26:56.514 { 01:26:56.514 "name": "BaseBdev2", 01:26:56.514 "uuid": "6abe8586-e453-5c90-92b6-c89f4d2e9102", 01:26:56.514 "is_configured": true, 01:26:56.514 "data_offset": 0, 01:26:56.514 "data_size": 65536 
01:26:56.514 } 01:26:56.514 ] 01:26:56.514 }' 01:26:56.514 05:21:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:26:56.514 05:21:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 01:26:56.514 05:21:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:26:56.772 05:21:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 01:26:56.772 05:21:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 01:26:56.772 05:21:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:26:56.772 05:21:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:26:56.772 05:21:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:26:56.772 05:21:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:26:56.772 05:21:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 01:26:56.772 05:21:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:26:56.772 05:21:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:26:56.772 05:21:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:26:56.772 05:21:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:26:56.772 05:21:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:26:56.772 05:21:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:26:56.772 05:21:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:56.772 05:21:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 01:26:56.772 
05:21:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:56.772 05:21:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:26:56.772 "name": "raid_bdev1", 01:26:56.772 "uuid": "4133e268-c8c0-4b40-b97e-c6bd3e994641", 01:26:56.772 "strip_size_kb": 0, 01:26:56.772 "state": "online", 01:26:56.772 "raid_level": "raid1", 01:26:56.772 "superblock": false, 01:26:56.772 "num_base_bdevs": 2, 01:26:56.772 "num_base_bdevs_discovered": 2, 01:26:56.772 "num_base_bdevs_operational": 2, 01:26:56.772 "base_bdevs_list": [ 01:26:56.772 { 01:26:56.772 "name": "spare", 01:26:56.772 "uuid": "d48f6f6d-0546-5eee-bd54-c4b4bb471aca", 01:26:56.772 "is_configured": true, 01:26:56.772 "data_offset": 0, 01:26:56.772 "data_size": 65536 01:26:56.772 }, 01:26:56.772 { 01:26:56.772 "name": "BaseBdev2", 01:26:56.772 "uuid": "6abe8586-e453-5c90-92b6-c89f4d2e9102", 01:26:56.772 "is_configured": true, 01:26:56.772 "data_offset": 0, 01:26:56.772 "data_size": 65536 01:26:56.772 } 01:26:56.772 ] 01:26:56.772 }' 01:26:56.773 05:21:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:26:56.773 05:21:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 01:26:57.030 05:21:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 01:26:57.030 05:21:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:57.030 05:21:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 01:26:57.030 [2024-12-09 05:21:48.629873] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 01:26:57.030 [2024-12-09 05:21:48.629908] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 01:26:57.030 [2024-12-09 05:21:48.630005] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 01:26:57.030 [2024-12-09 05:21:48.630091] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 01:26:57.030 [2024-12-09 05:21:48.630106] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 01:26:57.030 05:21:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:57.030 05:21:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 01:26:57.030 05:21:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:57.030 05:21:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 01:26:57.030 05:21:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 01:26:57.289 05:21:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:57.289 05:21:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 01:26:57.289 05:21:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 01:26:57.289 05:21:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 01:26:57.289 05:21:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 01:26:57.289 05:21:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 01:26:57.289 05:21:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 01:26:57.289 05:21:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 01:26:57.289 05:21:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 01:26:57.289 05:21:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 01:26:57.289 05:21:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 01:26:57.289 05:21:48 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 01:26:57.289 05:21:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 01:26:57.289 05:21:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 01:26:57.548 /dev/nbd0 01:26:57.548 05:21:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 01:26:57.548 05:21:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 01:26:57.548 05:21:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 01:26:57.548 05:21:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 01:26:57.548 05:21:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 01:26:57.548 05:21:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 01:26:57.548 05:21:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 01:26:57.548 05:21:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 01:26:57.548 05:21:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 01:26:57.548 05:21:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 01:26:57.548 05:21:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 01:26:57.548 1+0 records in 01:26:57.548 1+0 records out 01:26:57.548 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000235277 s, 17.4 MB/s 01:26:57.548 05:21:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:26:57.548 05:21:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 01:26:57.548 05:21:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # 
rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:26:57.548 05:21:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 01:26:57.548 05:21:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 01:26:57.548 05:21:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 01:26:57.548 05:21:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 01:26:57.548 05:21:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 01:26:57.806 /dev/nbd1 01:26:57.807 05:21:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 01:26:57.807 05:21:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 01:26:57.807 05:21:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 01:26:57.807 05:21:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 01:26:57.807 05:21:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 01:26:57.807 05:21:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 01:26:57.807 05:21:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 01:26:57.807 05:21:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 01:26:57.807 05:21:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 01:26:57.807 05:21:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 01:26:57.807 05:21:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 01:26:57.807 1+0 records in 01:26:57.807 1+0 records out 01:26:57.807 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000305917 s, 13.4 MB/s 01:26:57.807 05:21:49 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:26:57.807 05:21:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 01:26:57.807 05:21:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:26:57.807 05:21:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 01:26:57.807 05:21:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 01:26:57.807 05:21:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 01:26:57.807 05:21:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 01:26:57.807 05:21:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 01:26:58.065 05:21:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 01:26:58.065 05:21:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 01:26:58.065 05:21:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 01:26:58.065 05:21:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 01:26:58.065 05:21:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 01:26:58.065 05:21:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 01:26:58.065 05:21:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 01:26:58.322 05:21:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 01:26:58.322 05:21:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 01:26:58.322 05:21:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 01:26:58.322 
05:21:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:26:58.322 05:21:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:26:58.322 05:21:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 01:26:58.322 05:21:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 01:26:58.323 05:21:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 01:26:58.323 05:21:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 01:26:58.323 05:21:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 01:26:58.889 05:21:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 01:26:58.889 05:21:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 01:26:58.889 05:21:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 01:26:58.889 05:21:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:26:58.889 05:21:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:26:58.889 05:21:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 01:26:58.889 05:21:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 01:26:58.889 05:21:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 01:26:58.889 05:21:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 01:26:58.889 05:21:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 75412 01:26:58.889 05:21:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 75412 ']' 01:26:58.889 05:21:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 75412 01:26:58.889 05:21:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 
-- # uname 01:26:58.889 05:21:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:26:58.889 05:21:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75412 01:26:58.889 killing process with pid 75412 01:26:58.889 Received shutdown signal, test time was about 60.000000 seconds 01:26:58.889 01:26:58.889 Latency(us) 01:26:58.889 [2024-12-09T05:21:50.506Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:26:58.889 [2024-12-09T05:21:50.506Z] =================================================================================================================== 01:26:58.889 [2024-12-09T05:21:50.506Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 01:26:58.889 05:21:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:26:58.889 05:21:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:26:58.889 05:21:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75412' 01:26:58.889 05:21:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 75412 01:26:58.889 [2024-12-09 05:21:50.242428] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 01:26:58.889 05:21:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 75412 01:26:58.889 [2024-12-09 05:21:50.470847] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 01:27:00.266 ************************************ 01:27:00.266 END TEST raid_rebuild_test 01:27:00.266 ************************************ 01:27:00.266 05:21:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 01:27:00.266 01:27:00.266 real 0m18.195s 01:27:00.266 user 0m20.666s 01:27:00.266 sys 0m3.324s 01:27:00.266 05:21:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 01:27:00.266 05:21:51 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 01:27:00.266 05:21:51 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 01:27:00.266 05:21:51 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 01:27:00.266 05:21:51 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 01:27:00.266 05:21:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 01:27:00.266 ************************************ 01:27:00.266 START TEST raid_rebuild_test_sb 01:27:00.266 ************************************ 01:27:00.266 05:21:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 01:27:00.266 05:21:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 01:27:00.266 05:21:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 01:27:00.266 05:21:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 01:27:00.266 05:21:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 01:27:00.266 05:21:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 01:27:00.266 05:21:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 01:27:00.266 05:21:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 01:27:00.266 05:21:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 01:27:00.266 05:21:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 01:27:00.266 05:21:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 01:27:00.266 05:21:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 01:27:00.266 05:21:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 01:27:00.266 05:21:51 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 01:27:00.266 05:21:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 01:27:00.266 05:21:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 01:27:00.266 05:21:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 01:27:00.266 05:21:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 01:27:00.266 05:21:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 01:27:00.266 05:21:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 01:27:00.266 05:21:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 01:27:00.266 05:21:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 01:27:00.266 05:21:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 01:27:00.266 05:21:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 01:27:00.266 05:21:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 01:27:00.266 05:21:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=75859 01:27:00.266 05:21:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 01:27:00.266 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
01:27:00.266 05:21:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 75859 01:27:00.266 05:21:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 75859 ']' 01:27:00.266 05:21:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:27:00.266 05:21:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 01:27:00.266 05:21:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:27:00.266 05:21:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 01:27:00.266 05:21:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:27:00.266 I/O size of 3145728 is greater than zero copy threshold (65536). 01:27:00.266 Zero copy mechanism will not be used. 01:27:00.266 [2024-12-09 05:21:51.705797] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
01:27:00.266 [2024-12-09 05:21:51.706025] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75859 ] 01:27:00.525 [2024-12-09 05:21:51.888606] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:27:00.525 [2024-12-09 05:21:52.007202] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:27:00.793 [2024-12-09 05:21:52.188343] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 01:27:00.793 [2024-12-09 05:21:52.188807] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 01:27:01.066 05:21:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:27:01.066 05:21:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 01:27:01.066 05:21:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 01:27:01.066 05:21:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 01:27:01.066 05:21:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:01.066 05:21:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:27:01.325 BaseBdev1_malloc 01:27:01.325 05:21:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:01.325 05:21:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 01:27:01.325 05:21:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:01.325 05:21:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:27:01.325 [2024-12-09 05:21:52.698474] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 01:27:01.325 [2024-12-09 05:21:52.698605] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:27:01.325 [2024-12-09 05:21:52.698636] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 01:27:01.325 [2024-12-09 05:21:52.698669] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:27:01.325 [2024-12-09 05:21:52.701773] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:27:01.326 [2024-12-09 05:21:52.701863] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 01:27:01.326 BaseBdev1 01:27:01.326 05:21:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:01.326 05:21:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 01:27:01.326 05:21:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 01:27:01.326 05:21:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:01.326 05:21:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:27:01.326 BaseBdev2_malloc 01:27:01.326 05:21:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:01.326 05:21:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 01:27:01.326 05:21:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:01.326 05:21:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:27:01.326 [2024-12-09 05:21:52.748257] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 01:27:01.326 [2024-12-09 05:21:52.748522] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:27:01.326 [2024-12-09 05:21:52.748566] vbdev_passthru.c: 
682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 01:27:01.326 [2024-12-09 05:21:52.748586] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:27:01.326 [2024-12-09 05:21:52.751533] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:27:01.326 [2024-12-09 05:21:52.751583] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 01:27:01.326 BaseBdev2 01:27:01.326 05:21:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:01.326 05:21:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 01:27:01.326 05:21:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:01.326 05:21:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:27:01.326 spare_malloc 01:27:01.326 05:21:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:01.326 05:21:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 01:27:01.326 05:21:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:01.326 05:21:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:27:01.326 spare_delay 01:27:01.326 05:21:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:01.326 05:21:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 01:27:01.326 05:21:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:01.326 05:21:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:27:01.326 [2024-12-09 05:21:52.824399] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 
01:27:01.326 [2024-12-09 05:21:52.824499] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:27:01.326 [2024-12-09 05:21:52.824526] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 01:27:01.326 [2024-12-09 05:21:52.824543] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:27:01.326 [2024-12-09 05:21:52.827163] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:27:01.326 [2024-12-09 05:21:52.827242] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 01:27:01.326 spare 01:27:01.326 05:21:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:01.326 05:21:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 01:27:01.326 05:21:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:01.326 05:21:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:27:01.326 [2024-12-09 05:21:52.836454] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 01:27:01.326 [2024-12-09 05:21:52.838866] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 01:27:01.326 [2024-12-09 05:21:52.839128] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 01:27:01.326 [2024-12-09 05:21:52.839150] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 01:27:01.326 [2024-12-09 05:21:52.839470] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 01:27:01.326 [2024-12-09 05:21:52.839684] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 01:27:01.326 [2024-12-09 05:21:52.839698] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, 
raid_bdev 0x617000007780 01:27:01.326 [2024-12-09 05:21:52.839889] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:27:01.326 05:21:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:01.326 05:21:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 01:27:01.326 05:21:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:27:01.326 05:21:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:27:01.326 05:21:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:27:01.326 05:21:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:27:01.326 05:21:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 01:27:01.326 05:21:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:27:01.326 05:21:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:27:01.326 05:21:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:27:01.326 05:21:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 01:27:01.326 05:21:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:27:01.326 05:21:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:27:01.326 05:21:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:01.326 05:21:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:27:01.326 05:21:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:01.326 05:21:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
01:27:01.326 "name": "raid_bdev1", 01:27:01.326 "uuid": "091b37a6-ade7-452b-91f5-4c8f05f0b795", 01:27:01.326 "strip_size_kb": 0, 01:27:01.326 "state": "online", 01:27:01.326 "raid_level": "raid1", 01:27:01.326 "superblock": true, 01:27:01.326 "num_base_bdevs": 2, 01:27:01.326 "num_base_bdevs_discovered": 2, 01:27:01.326 "num_base_bdevs_operational": 2, 01:27:01.326 "base_bdevs_list": [ 01:27:01.326 { 01:27:01.326 "name": "BaseBdev1", 01:27:01.326 "uuid": "ec291401-b354-598a-a444-94c69c426b82", 01:27:01.326 "is_configured": true, 01:27:01.326 "data_offset": 2048, 01:27:01.326 "data_size": 63488 01:27:01.326 }, 01:27:01.326 { 01:27:01.326 "name": "BaseBdev2", 01:27:01.326 "uuid": "c3fa4213-441e-5251-b002-67625fc7611b", 01:27:01.326 "is_configured": true, 01:27:01.326 "data_offset": 2048, 01:27:01.326 "data_size": 63488 01:27:01.326 } 01:27:01.326 ] 01:27:01.326 }' 01:27:01.326 05:21:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:27:01.326 05:21:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:27:01.893 05:21:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 01:27:01.893 05:21:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:01.893 05:21:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:27:01.893 05:21:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 01:27:01.893 [2024-12-09 05:21:53.353022] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 01:27:01.893 05:21:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:01.893 05:21:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 01:27:01.893 05:21:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 01:27:01.893 05:21:53 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 01:27:01.893 05:21:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:01.893 05:21:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:27:01.893 05:21:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:01.893 05:21:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 01:27:01.893 05:21:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 01:27:01.893 05:21:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 01:27:01.893 05:21:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 01:27:01.893 05:21:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 01:27:01.893 05:21:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 01:27:01.893 05:21:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 01:27:01.893 05:21:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 01:27:01.893 05:21:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 01:27:01.893 05:21:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 01:27:01.893 05:21:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 01:27:01.893 05:21:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 01:27:01.893 05:21:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 01:27:01.893 05:21:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 01:27:02.152 [2024-12-09 05:21:53.672948] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 01:27:02.152 /dev/nbd0 01:27:02.152 05:21:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 01:27:02.152 05:21:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 01:27:02.152 05:21:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 01:27:02.152 05:21:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 01:27:02.152 05:21:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 01:27:02.152 05:21:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 01:27:02.152 05:21:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 01:27:02.152 05:21:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 01:27:02.152 05:21:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 01:27:02.152 05:21:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 01:27:02.152 05:21:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 01:27:02.152 1+0 records in 01:27:02.152 1+0 records out 01:27:02.152 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000549733 s, 7.5 MB/s 01:27:02.152 05:21:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:27:02.152 05:21:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 01:27:02.152 05:21:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:27:02.152 05:21:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 01:27:02.152 05:21:53 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 01:27:02.152 05:21:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 01:27:02.152 05:21:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 01:27:02.152 05:21:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 01:27:02.152 05:21:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 01:27:02.152 05:21:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 01:27:08.713 63488+0 records in 01:27:08.713 63488+0 records out 01:27:08.713 32505856 bytes (33 MB, 31 MiB) copied, 5.90339 s, 5.5 MB/s 01:27:08.713 05:21:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 01:27:08.713 05:21:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 01:27:08.713 05:21:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 01:27:08.713 05:21:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 01:27:08.713 05:21:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 01:27:08.713 05:21:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 01:27:08.713 05:21:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 01:27:08.713 [2024-12-09 05:21:59.929701] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:27:08.713 05:21:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 01:27:08.713 05:21:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 01:27:08.713 05:21:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 
01:27:08.713 05:21:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:27:08.713 05:21:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:27:08.713 05:21:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 01:27:08.713 05:21:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 01:27:08.713 05:21:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 01:27:08.713 05:21:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 01:27:08.713 05:21:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:08.713 05:21:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:27:08.713 [2024-12-09 05:21:59.965842] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 01:27:08.713 05:21:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:08.713 05:21:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 01:27:08.713 05:21:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:27:08.713 05:21:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:27:08.713 05:21:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:27:08.713 05:21:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:27:08.713 05:21:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 01:27:08.713 05:21:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:27:08.713 05:21:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:27:08.713 05:21:59 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:27:08.713 05:21:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 01:27:08.713 05:21:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:27:08.713 05:21:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:27:08.713 05:21:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:08.713 05:21:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:27:08.713 05:21:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:08.713 05:22:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:27:08.713 "name": "raid_bdev1", 01:27:08.713 "uuid": "091b37a6-ade7-452b-91f5-4c8f05f0b795", 01:27:08.713 "strip_size_kb": 0, 01:27:08.713 "state": "online", 01:27:08.713 "raid_level": "raid1", 01:27:08.713 "superblock": true, 01:27:08.713 "num_base_bdevs": 2, 01:27:08.713 "num_base_bdevs_discovered": 1, 01:27:08.713 "num_base_bdevs_operational": 1, 01:27:08.713 "base_bdevs_list": [ 01:27:08.713 { 01:27:08.713 "name": null, 01:27:08.713 "uuid": "00000000-0000-0000-0000-000000000000", 01:27:08.713 "is_configured": false, 01:27:08.713 "data_offset": 0, 01:27:08.713 "data_size": 63488 01:27:08.713 }, 01:27:08.713 { 01:27:08.713 "name": "BaseBdev2", 01:27:08.713 "uuid": "c3fa4213-441e-5251-b002-67625fc7611b", 01:27:08.713 "is_configured": true, 01:27:08.713 "data_offset": 2048, 01:27:08.713 "data_size": 63488 01:27:08.713 } 01:27:08.713 ] 01:27:08.713 }' 01:27:08.713 05:22:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:27:08.713 05:22:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:27:08.972 05:22:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 
01:27:08.972 05:22:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:08.972 05:22:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:27:08.972 [2024-12-09 05:22:00.450027] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 01:27:08.972 [2024-12-09 05:22:00.467780] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3360 01:27:08.972 05:22:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:08.972 05:22:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 01:27:08.972 [2024-12-09 05:22:00.470280] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 01:27:09.907 05:22:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 01:27:09.907 05:22:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:27:09.907 05:22:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 01:27:09.907 05:22:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 01:27:09.907 05:22:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:27:09.907 05:22:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:27:09.907 05:22:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:27:09.907 05:22:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:09.907 05:22:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:27:09.907 05:22:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:10.166 05:22:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
01:27:10.166 "name": "raid_bdev1", 01:27:10.166 "uuid": "091b37a6-ade7-452b-91f5-4c8f05f0b795", 01:27:10.166 "strip_size_kb": 0, 01:27:10.166 "state": "online", 01:27:10.166 "raid_level": "raid1", 01:27:10.166 "superblock": true, 01:27:10.166 "num_base_bdevs": 2, 01:27:10.166 "num_base_bdevs_discovered": 2, 01:27:10.166 "num_base_bdevs_operational": 2, 01:27:10.166 "process": { 01:27:10.166 "type": "rebuild", 01:27:10.166 "target": "spare", 01:27:10.166 "progress": { 01:27:10.166 "blocks": 20480, 01:27:10.166 "percent": 32 01:27:10.166 } 01:27:10.166 }, 01:27:10.166 "base_bdevs_list": [ 01:27:10.166 { 01:27:10.166 "name": "spare", 01:27:10.166 "uuid": "d65c2bbc-ffe4-58ec-abcb-6b8d73aa71e6", 01:27:10.166 "is_configured": true, 01:27:10.166 "data_offset": 2048, 01:27:10.166 "data_size": 63488 01:27:10.166 }, 01:27:10.166 { 01:27:10.166 "name": "BaseBdev2", 01:27:10.166 "uuid": "c3fa4213-441e-5251-b002-67625fc7611b", 01:27:10.166 "is_configured": true, 01:27:10.166 "data_offset": 2048, 01:27:10.166 "data_size": 63488 01:27:10.166 } 01:27:10.166 ] 01:27:10.166 }' 01:27:10.166 05:22:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:27:10.166 05:22:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 01:27:10.166 05:22:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:27:10.166 05:22:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 01:27:10.166 05:22:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 01:27:10.166 05:22:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:10.166 05:22:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:27:10.166 [2024-12-09 05:22:01.636921] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 01:27:10.166 [2024-12-09 
05:22:01.679963] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 01:27:10.166 [2024-12-09 05:22:01.680076] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:27:10.166 [2024-12-09 05:22:01.680097] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 01:27:10.166 [2024-12-09 05:22:01.680115] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 01:27:10.166 05:22:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:10.166 05:22:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 01:27:10.166 05:22:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:27:10.166 05:22:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:27:10.166 05:22:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:27:10.166 05:22:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:27:10.166 05:22:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 01:27:10.166 05:22:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:27:10.166 05:22:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:27:10.166 05:22:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:27:10.166 05:22:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 01:27:10.166 05:22:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:27:10.166 05:22:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:27:10.166 05:22:01 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:10.166 05:22:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:27:10.166 05:22:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:10.166 05:22:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:27:10.166 "name": "raid_bdev1", 01:27:10.166 "uuid": "091b37a6-ade7-452b-91f5-4c8f05f0b795", 01:27:10.166 "strip_size_kb": 0, 01:27:10.166 "state": "online", 01:27:10.166 "raid_level": "raid1", 01:27:10.166 "superblock": true, 01:27:10.166 "num_base_bdevs": 2, 01:27:10.166 "num_base_bdevs_discovered": 1, 01:27:10.166 "num_base_bdevs_operational": 1, 01:27:10.166 "base_bdevs_list": [ 01:27:10.166 { 01:27:10.166 "name": null, 01:27:10.166 "uuid": "00000000-0000-0000-0000-000000000000", 01:27:10.166 "is_configured": false, 01:27:10.166 "data_offset": 0, 01:27:10.166 "data_size": 63488 01:27:10.166 }, 01:27:10.166 { 01:27:10.166 "name": "BaseBdev2", 01:27:10.166 "uuid": "c3fa4213-441e-5251-b002-67625fc7611b", 01:27:10.166 "is_configured": true, 01:27:10.166 "data_offset": 2048, 01:27:10.166 "data_size": 63488 01:27:10.166 } 01:27:10.166 ] 01:27:10.166 }' 01:27:10.166 05:22:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:27:10.166 05:22:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:27:10.760 05:22:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 01:27:10.760 05:22:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:27:10.760 05:22:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 01:27:10.760 05:22:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 01:27:10.760 05:22:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local 
raid_bdev_info 01:27:10.760 05:22:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:27:10.760 05:22:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:10.760 05:22:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:27:10.760 05:22:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:27:10.760 05:22:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:10.760 05:22:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:27:10.760 "name": "raid_bdev1", 01:27:10.760 "uuid": "091b37a6-ade7-452b-91f5-4c8f05f0b795", 01:27:10.760 "strip_size_kb": 0, 01:27:10.760 "state": "online", 01:27:10.760 "raid_level": "raid1", 01:27:10.760 "superblock": true, 01:27:10.760 "num_base_bdevs": 2, 01:27:10.760 "num_base_bdevs_discovered": 1, 01:27:10.760 "num_base_bdevs_operational": 1, 01:27:10.760 "base_bdevs_list": [ 01:27:10.760 { 01:27:10.760 "name": null, 01:27:10.760 "uuid": "00000000-0000-0000-0000-000000000000", 01:27:10.761 "is_configured": false, 01:27:10.761 "data_offset": 0, 01:27:10.761 "data_size": 63488 01:27:10.761 }, 01:27:10.761 { 01:27:10.761 "name": "BaseBdev2", 01:27:10.761 "uuid": "c3fa4213-441e-5251-b002-67625fc7611b", 01:27:10.761 "is_configured": true, 01:27:10.761 "data_offset": 2048, 01:27:10.761 "data_size": 63488 01:27:10.761 } 01:27:10.761 ] 01:27:10.761 }' 01:27:10.761 05:22:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:27:10.761 05:22:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 01:27:10.761 05:22:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:27:11.019 05:22:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 01:27:11.019 05:22:02 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 01:27:11.019 05:22:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:11.019 05:22:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:27:11.019 [2024-12-09 05:22:02.403360] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 01:27:11.019 [2024-12-09 05:22:02.419545] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3430 01:27:11.019 05:22:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:11.019 05:22:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 01:27:11.019 [2024-12-09 05:22:02.422241] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 01:27:11.953 05:22:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 01:27:11.953 05:22:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:27:11.953 05:22:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 01:27:11.953 05:22:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 01:27:11.953 05:22:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:27:11.953 05:22:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:27:11.953 05:22:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:27:11.953 05:22:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:11.953 05:22:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:27:11.953 05:22:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 01:27:11.953 05:22:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:27:11.953 "name": "raid_bdev1", 01:27:11.953 "uuid": "091b37a6-ade7-452b-91f5-4c8f05f0b795", 01:27:11.953 "strip_size_kb": 0, 01:27:11.953 "state": "online", 01:27:11.953 "raid_level": "raid1", 01:27:11.953 "superblock": true, 01:27:11.953 "num_base_bdevs": 2, 01:27:11.953 "num_base_bdevs_discovered": 2, 01:27:11.953 "num_base_bdevs_operational": 2, 01:27:11.953 "process": { 01:27:11.953 "type": "rebuild", 01:27:11.953 "target": "spare", 01:27:11.953 "progress": { 01:27:11.953 "blocks": 20480, 01:27:11.953 "percent": 32 01:27:11.953 } 01:27:11.953 }, 01:27:11.953 "base_bdevs_list": [ 01:27:11.953 { 01:27:11.953 "name": "spare", 01:27:11.953 "uuid": "d65c2bbc-ffe4-58ec-abcb-6b8d73aa71e6", 01:27:11.953 "is_configured": true, 01:27:11.953 "data_offset": 2048, 01:27:11.953 "data_size": 63488 01:27:11.953 }, 01:27:11.953 { 01:27:11.953 "name": "BaseBdev2", 01:27:11.953 "uuid": "c3fa4213-441e-5251-b002-67625fc7611b", 01:27:11.953 "is_configured": true, 01:27:11.953 "data_offset": 2048, 01:27:11.953 "data_size": 63488 01:27:11.953 } 01:27:11.953 ] 01:27:11.953 }' 01:27:11.953 05:22:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:27:11.953 05:22:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 01:27:11.953 05:22:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:27:12.211 05:22:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 01:27:12.211 05:22:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 01:27:12.211 05:22:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 01:27:12.211 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 01:27:12.211 05:22:03 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 01:27:12.211 05:22:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 01:27:12.211 05:22:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 01:27:12.211 05:22:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=425 01:27:12.211 05:22:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 01:27:12.211 05:22:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 01:27:12.211 05:22:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:27:12.211 05:22:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 01:27:12.211 05:22:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 01:27:12.211 05:22:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:27:12.211 05:22:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:27:12.211 05:22:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:12.211 05:22:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:27:12.211 05:22:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:27:12.211 05:22:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:12.211 05:22:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:27:12.211 "name": "raid_bdev1", 01:27:12.211 "uuid": "091b37a6-ade7-452b-91f5-4c8f05f0b795", 01:27:12.211 "strip_size_kb": 0, 01:27:12.211 "state": "online", 01:27:12.211 "raid_level": "raid1", 01:27:12.211 "superblock": true, 01:27:12.211 "num_base_bdevs": 2, 01:27:12.211 
"num_base_bdevs_discovered": 2, 01:27:12.211 "num_base_bdevs_operational": 2, 01:27:12.211 "process": { 01:27:12.211 "type": "rebuild", 01:27:12.211 "target": "spare", 01:27:12.211 "progress": { 01:27:12.211 "blocks": 22528, 01:27:12.211 "percent": 35 01:27:12.211 } 01:27:12.211 }, 01:27:12.211 "base_bdevs_list": [ 01:27:12.211 { 01:27:12.212 "name": "spare", 01:27:12.212 "uuid": "d65c2bbc-ffe4-58ec-abcb-6b8d73aa71e6", 01:27:12.212 "is_configured": true, 01:27:12.212 "data_offset": 2048, 01:27:12.212 "data_size": 63488 01:27:12.212 }, 01:27:12.212 { 01:27:12.212 "name": "BaseBdev2", 01:27:12.212 "uuid": "c3fa4213-441e-5251-b002-67625fc7611b", 01:27:12.212 "is_configured": true, 01:27:12.212 "data_offset": 2048, 01:27:12.212 "data_size": 63488 01:27:12.212 } 01:27:12.212 ] 01:27:12.212 }' 01:27:12.212 05:22:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:27:12.212 05:22:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 01:27:12.212 05:22:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:27:12.212 05:22:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 01:27:12.212 05:22:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 01:27:13.144 05:22:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 01:27:13.144 05:22:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 01:27:13.144 05:22:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:27:13.144 05:22:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 01:27:13.144 05:22:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 01:27:13.144 05:22:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # 
local raid_bdev_info 01:27:13.144 05:22:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:27:13.144 05:22:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:27:13.144 05:22:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:13.144 05:22:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:27:13.403 05:22:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:13.403 05:22:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:27:13.403 "name": "raid_bdev1", 01:27:13.403 "uuid": "091b37a6-ade7-452b-91f5-4c8f05f0b795", 01:27:13.403 "strip_size_kb": 0, 01:27:13.403 "state": "online", 01:27:13.403 "raid_level": "raid1", 01:27:13.403 "superblock": true, 01:27:13.403 "num_base_bdevs": 2, 01:27:13.403 "num_base_bdevs_discovered": 2, 01:27:13.403 "num_base_bdevs_operational": 2, 01:27:13.403 "process": { 01:27:13.403 "type": "rebuild", 01:27:13.403 "target": "spare", 01:27:13.403 "progress": { 01:27:13.403 "blocks": 47104, 01:27:13.403 "percent": 74 01:27:13.403 } 01:27:13.403 }, 01:27:13.403 "base_bdevs_list": [ 01:27:13.403 { 01:27:13.403 "name": "spare", 01:27:13.403 "uuid": "d65c2bbc-ffe4-58ec-abcb-6b8d73aa71e6", 01:27:13.403 "is_configured": true, 01:27:13.403 "data_offset": 2048, 01:27:13.403 "data_size": 63488 01:27:13.403 }, 01:27:13.403 { 01:27:13.403 "name": "BaseBdev2", 01:27:13.403 "uuid": "c3fa4213-441e-5251-b002-67625fc7611b", 01:27:13.403 "is_configured": true, 01:27:13.403 "data_offset": 2048, 01:27:13.403 "data_size": 63488 01:27:13.403 } 01:27:13.403 ] 01:27:13.403 }' 01:27:13.403 05:22:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:27:13.403 05:22:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 01:27:13.403 05:22:04 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:27:13.403 05:22:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 01:27:13.403 05:22:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 01:27:13.968 [2024-12-09 05:22:05.547552] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 01:27:13.968 [2024-12-09 05:22:05.547700] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 01:27:13.968 [2024-12-09 05:22:05.547891] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:27:14.535 05:22:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 01:27:14.535 05:22:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 01:27:14.535 05:22:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:27:14.535 05:22:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 01:27:14.535 05:22:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 01:27:14.535 05:22:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:27:14.535 05:22:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:27:14.535 05:22:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:27:14.535 05:22:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:14.535 05:22:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:27:14.535 05:22:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:14.535 05:22:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
01:27:14.535 "name": "raid_bdev1", 01:27:14.535 "uuid": "091b37a6-ade7-452b-91f5-4c8f05f0b795", 01:27:14.535 "strip_size_kb": 0, 01:27:14.535 "state": "online", 01:27:14.535 "raid_level": "raid1", 01:27:14.535 "superblock": true, 01:27:14.535 "num_base_bdevs": 2, 01:27:14.535 "num_base_bdevs_discovered": 2, 01:27:14.535 "num_base_bdevs_operational": 2, 01:27:14.535 "base_bdevs_list": [ 01:27:14.535 { 01:27:14.535 "name": "spare", 01:27:14.535 "uuid": "d65c2bbc-ffe4-58ec-abcb-6b8d73aa71e6", 01:27:14.535 "is_configured": true, 01:27:14.535 "data_offset": 2048, 01:27:14.535 "data_size": 63488 01:27:14.535 }, 01:27:14.535 { 01:27:14.535 "name": "BaseBdev2", 01:27:14.535 "uuid": "c3fa4213-441e-5251-b002-67625fc7611b", 01:27:14.535 "is_configured": true, 01:27:14.535 "data_offset": 2048, 01:27:14.535 "data_size": 63488 01:27:14.535 } 01:27:14.535 ] 01:27:14.535 }' 01:27:14.535 05:22:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:27:14.535 05:22:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 01:27:14.535 05:22:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:27:14.535 05:22:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 01:27:14.535 05:22:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 01:27:14.535 05:22:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 01:27:14.535 05:22:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:27:14.535 05:22:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 01:27:14.535 05:22:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 01:27:14.535 05:22:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:27:14.535 05:22:06 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:27:14.535 05:22:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:27:14.535 05:22:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:14.535 05:22:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:27:14.535 05:22:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:14.535 05:22:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:27:14.535 "name": "raid_bdev1", 01:27:14.535 "uuid": "091b37a6-ade7-452b-91f5-4c8f05f0b795", 01:27:14.535 "strip_size_kb": 0, 01:27:14.535 "state": "online", 01:27:14.535 "raid_level": "raid1", 01:27:14.535 "superblock": true, 01:27:14.535 "num_base_bdevs": 2, 01:27:14.535 "num_base_bdevs_discovered": 2, 01:27:14.535 "num_base_bdevs_operational": 2, 01:27:14.535 "base_bdevs_list": [ 01:27:14.535 { 01:27:14.535 "name": "spare", 01:27:14.535 "uuid": "d65c2bbc-ffe4-58ec-abcb-6b8d73aa71e6", 01:27:14.535 "is_configured": true, 01:27:14.535 "data_offset": 2048, 01:27:14.535 "data_size": 63488 01:27:14.535 }, 01:27:14.535 { 01:27:14.535 "name": "BaseBdev2", 01:27:14.535 "uuid": "c3fa4213-441e-5251-b002-67625fc7611b", 01:27:14.535 "is_configured": true, 01:27:14.535 "data_offset": 2048, 01:27:14.535 "data_size": 63488 01:27:14.535 } 01:27:14.535 ] 01:27:14.535 }' 01:27:14.535 05:22:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:27:14.794 05:22:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 01:27:14.794 05:22:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:27:14.794 05:22:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 01:27:14.794 05:22:06 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 01:27:14.794 05:22:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:27:14.794 05:22:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:27:14.794 05:22:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:27:14.794 05:22:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:27:14.794 05:22:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 01:27:14.794 05:22:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:27:14.794 05:22:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:27:14.794 05:22:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:27:14.794 05:22:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 01:27:14.794 05:22:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:27:14.794 05:22:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:27:14.794 05:22:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:14.794 05:22:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:27:14.794 05:22:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:14.794 05:22:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:27:14.794 "name": "raid_bdev1", 01:27:14.794 "uuid": "091b37a6-ade7-452b-91f5-4c8f05f0b795", 01:27:14.794 "strip_size_kb": 0, 01:27:14.794 "state": "online", 01:27:14.794 "raid_level": "raid1", 01:27:14.794 "superblock": true, 01:27:14.794 "num_base_bdevs": 2, 01:27:14.794 
"num_base_bdevs_discovered": 2, 01:27:14.794 "num_base_bdevs_operational": 2, 01:27:14.794 "base_bdevs_list": [ 01:27:14.794 { 01:27:14.794 "name": "spare", 01:27:14.794 "uuid": "d65c2bbc-ffe4-58ec-abcb-6b8d73aa71e6", 01:27:14.794 "is_configured": true, 01:27:14.794 "data_offset": 2048, 01:27:14.795 "data_size": 63488 01:27:14.795 }, 01:27:14.795 { 01:27:14.795 "name": "BaseBdev2", 01:27:14.795 "uuid": "c3fa4213-441e-5251-b002-67625fc7611b", 01:27:14.795 "is_configured": true, 01:27:14.795 "data_offset": 2048, 01:27:14.795 "data_size": 63488 01:27:14.795 } 01:27:14.795 ] 01:27:14.795 }' 01:27:14.795 05:22:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:27:14.795 05:22:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:27:15.361 05:22:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 01:27:15.361 05:22:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:15.361 05:22:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:27:15.361 [2024-12-09 05:22:06.756747] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 01:27:15.361 [2024-12-09 05:22:06.756821] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 01:27:15.361 [2024-12-09 05:22:06.756918] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 01:27:15.361 [2024-12-09 05:22:06.757024] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 01:27:15.361 [2024-12-09 05:22:06.757041] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 01:27:15.361 05:22:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:15.361 05:22:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs 
all 01:27:15.361 05:22:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 01:27:15.361 05:22:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:15.361 05:22:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:27:15.361 05:22:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:15.361 05:22:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 01:27:15.361 05:22:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 01:27:15.361 05:22:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 01:27:15.361 05:22:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 01:27:15.361 05:22:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 01:27:15.361 05:22:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 01:27:15.361 05:22:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 01:27:15.361 05:22:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 01:27:15.361 05:22:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 01:27:15.361 05:22:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 01:27:15.361 05:22:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 01:27:15.361 05:22:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 01:27:15.361 05:22:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 01:27:15.619 /dev/nbd0 01:27:15.619 05:22:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename 
/dev/nbd0 01:27:15.619 05:22:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 01:27:15.619 05:22:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 01:27:15.619 05:22:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 01:27:15.619 05:22:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 01:27:15.619 05:22:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 01:27:15.619 05:22:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 01:27:15.619 05:22:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 01:27:15.619 05:22:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 01:27:15.619 05:22:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 01:27:15.619 05:22:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 01:27:15.619 1+0 records in 01:27:15.619 1+0 records out 01:27:15.619 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000351552 s, 11.7 MB/s 01:27:15.619 05:22:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:27:15.619 05:22:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 01:27:15.619 05:22:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:27:15.619 05:22:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 01:27:15.619 05:22:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 01:27:15.619 05:22:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 01:27:15.619 05:22:07 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 01:27:15.619 05:22:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 01:27:15.878 /dev/nbd1 01:27:15.878 05:22:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 01:27:15.878 05:22:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 01:27:15.878 05:22:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 01:27:15.878 05:22:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 01:27:15.878 05:22:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 01:27:15.878 05:22:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 01:27:15.878 05:22:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 01:27:15.878 05:22:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 01:27:15.878 05:22:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 01:27:15.878 05:22:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 01:27:15.878 05:22:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 01:27:16.137 1+0 records in 01:27:16.137 1+0 records out 01:27:16.137 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000358303 s, 11.4 MB/s 01:27:16.137 05:22:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:27:16.137 05:22:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 01:27:16.137 05:22:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:27:16.137 05:22:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 01:27:16.137 05:22:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 01:27:16.137 05:22:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 01:27:16.137 05:22:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 01:27:16.137 05:22:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 01:27:16.137 05:22:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 01:27:16.137 05:22:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 01:27:16.137 05:22:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 01:27:16.137 05:22:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 01:27:16.137 05:22:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 01:27:16.137 05:22:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 01:27:16.137 05:22:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 01:27:16.395 05:22:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 01:27:16.395 05:22:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 01:27:16.395 05:22:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 01:27:16.395 05:22:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:27:16.395 05:22:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:27:16.395 05:22:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd0 /proc/partitions 01:27:16.395 05:22:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 01:27:16.395 05:22:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 01:27:16.395 05:22:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 01:27:16.396 05:22:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 01:27:16.964 05:22:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 01:27:16.964 05:22:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 01:27:16.964 05:22:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 01:27:16.964 05:22:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:27:16.964 05:22:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:27:16.964 05:22:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 01:27:16.964 05:22:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 01:27:16.964 05:22:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 01:27:16.964 05:22:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 01:27:16.964 05:22:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 01:27:16.964 05:22:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:16.964 05:22:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:27:16.964 05:22:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:16.964 05:22:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 01:27:16.964 05:22:08 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:16.964 05:22:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:27:16.964 [2024-12-09 05:22:08.316025] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 01:27:16.964 [2024-12-09 05:22:08.316136] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:27:16.964 [2024-12-09 05:22:08.316172] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 01:27:16.964 [2024-12-09 05:22:08.316187] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:27:16.964 [2024-12-09 05:22:08.319308] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:27:16.964 [2024-12-09 05:22:08.319398] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 01:27:16.964 [2024-12-09 05:22:08.319532] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 01:27:16.964 [2024-12-09 05:22:08.319601] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 01:27:16.964 [2024-12-09 05:22:08.319775] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 01:27:16.964 spare 01:27:16.964 05:22:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:16.964 05:22:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 01:27:16.964 05:22:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:16.964 05:22:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:27:16.964 [2024-12-09 05:22:08.419895] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 01:27:16.964 [2024-12-09 05:22:08.419958] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 01:27:16.964 [2024-12-09 
05:22:08.420324] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ae0 01:27:16.964 [2024-12-09 05:22:08.420627] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 01:27:16.964 [2024-12-09 05:22:08.420651] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 01:27:16.964 [2024-12-09 05:22:08.420869] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:27:16.964 05:22:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:16.964 05:22:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 01:27:16.964 05:22:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:27:16.964 05:22:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:27:16.964 05:22:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:27:16.964 05:22:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:27:16.964 05:22:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 01:27:16.964 05:22:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:27:16.964 05:22:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:27:16.964 05:22:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:27:16.964 05:22:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 01:27:16.964 05:22:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:27:16.964 05:22:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:16.964 05:22:08 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:27:16.964 05:22:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:27:16.964 05:22:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:16.964 05:22:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:27:16.964 "name": "raid_bdev1", 01:27:16.964 "uuid": "091b37a6-ade7-452b-91f5-4c8f05f0b795", 01:27:16.964 "strip_size_kb": 0, 01:27:16.964 "state": "online", 01:27:16.964 "raid_level": "raid1", 01:27:16.964 "superblock": true, 01:27:16.964 "num_base_bdevs": 2, 01:27:16.964 "num_base_bdevs_discovered": 2, 01:27:16.964 "num_base_bdevs_operational": 2, 01:27:16.964 "base_bdevs_list": [ 01:27:16.964 { 01:27:16.964 "name": "spare", 01:27:16.964 "uuid": "d65c2bbc-ffe4-58ec-abcb-6b8d73aa71e6", 01:27:16.964 "is_configured": true, 01:27:16.964 "data_offset": 2048, 01:27:16.964 "data_size": 63488 01:27:16.964 }, 01:27:16.964 { 01:27:16.964 "name": "BaseBdev2", 01:27:16.964 "uuid": "c3fa4213-441e-5251-b002-67625fc7611b", 01:27:16.964 "is_configured": true, 01:27:16.964 "data_offset": 2048, 01:27:16.964 "data_size": 63488 01:27:16.964 } 01:27:16.964 ] 01:27:16.964 }' 01:27:16.964 05:22:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:27:16.964 05:22:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:27:17.531 05:22:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 01:27:17.531 05:22:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:27:17.531 05:22:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 01:27:17.531 05:22:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 01:27:17.532 05:22:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
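The `verify_raid_bdev_state` trace above captures the output of `rpc.py bdev_raid_get_bdevs all`, selects the `raid_bdev1` entry with `jq`, and checks its state, level, and base-bdev counts. The same kind of check can be sketched as a standalone snippet; here a captured sample of the JSON from this log stands in for the RPC call, and the field extraction uses GNU grep/sed instead of `jq` so the sketch is self-contained (the helper name `get_field` and the flattened sample are illustrative assumptions, not part of the SPDK test scripts):

```shell
#!/usr/bin/env bash
# Sample of the raid bdev info JSON seen in this log, flattened to scalar keys.
# In the real test it comes from:
#   scripts/rpc.py -s /var/tmp/spdk.sock bdev_raid_get_bdevs all \
#     | jq -r '.[] | select(.name == "raid_bdev1")'
raid_bdev_info='{
  "name": "raid_bdev1",
  "state": "online",
  "raid_level": "raid1",
  "num_base_bdevs_discovered": 2,
  "num_base_bdevs_operational": 2
}'

# Pull one scalar field out of the JSON blob (assumes GNU grep/sed, as on the
# CI host; good enough for flat keys, no substitute for jq on nested data).
get_field() {
    echo "$raid_bdev_info" | grep -o "\"$1\": *\"\?[^\",}]*" | sed 's/.*: *"\?//'
}

state=$(get_field state)
level=$(get_field raid_level)
discovered=$(get_field num_base_bdevs_discovered)

# Mirror the checks verify_raid_bdev_state performs on the parsed fields.
[ "$state" = online ] || { echo "unexpected state: $state"; exit 1; }
[ "$level" = raid1 ] || { echo "unexpected level: $level"; exit 1; }
[ "$discovered" -eq 2 ] || { echo "bad base bdev count: $discovered"; exit 1; }
echo "raid_bdev1 verified: $state/$level, $discovered base bdevs"
```

The real helper keeps the whole `jq`-filtered object in `raid_bdev_info` and re-queries it per field, which is why the log shows the JSON dumped once per verification pass.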
01:27:17.532 05:22:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:27:17.532 05:22:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:27:17.532 05:22:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:17.532 05:22:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:27:17.532 05:22:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:17.532 05:22:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:27:17.532 "name": "raid_bdev1", 01:27:17.532 "uuid": "091b37a6-ade7-452b-91f5-4c8f05f0b795", 01:27:17.532 "strip_size_kb": 0, 01:27:17.532 "state": "online", 01:27:17.532 "raid_level": "raid1", 01:27:17.532 "superblock": true, 01:27:17.532 "num_base_bdevs": 2, 01:27:17.532 "num_base_bdevs_discovered": 2, 01:27:17.532 "num_base_bdevs_operational": 2, 01:27:17.532 "base_bdevs_list": [ 01:27:17.532 { 01:27:17.532 "name": "spare", 01:27:17.532 "uuid": "d65c2bbc-ffe4-58ec-abcb-6b8d73aa71e6", 01:27:17.532 "is_configured": true, 01:27:17.532 "data_offset": 2048, 01:27:17.532 "data_size": 63488 01:27:17.532 }, 01:27:17.532 { 01:27:17.532 "name": "BaseBdev2", 01:27:17.532 "uuid": "c3fa4213-441e-5251-b002-67625fc7611b", 01:27:17.532 "is_configured": true, 01:27:17.532 "data_offset": 2048, 01:27:17.532 "data_size": 63488 01:27:17.532 } 01:27:17.532 ] 01:27:17.532 }' 01:27:17.532 05:22:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:27:17.532 05:22:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 01:27:17.532 05:22:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:27:17.532 05:22:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 01:27:17.532 05:22:09 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 01:27:17.532 05:22:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:17.532 05:22:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:27:17.532 05:22:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 01:27:17.532 05:22:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:17.791 05:22:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 01:27:17.791 05:22:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 01:27:17.791 05:22:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:17.791 05:22:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:27:17.791 [2024-12-09 05:22:09.169115] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 01:27:17.791 05:22:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:17.791 05:22:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 01:27:17.791 05:22:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:27:17.791 05:22:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:27:17.791 05:22:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:27:17.791 05:22:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:27:17.791 05:22:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 01:27:17.791 05:22:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:27:17.791 05:22:09 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:27:17.791 05:22:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:27:17.791 05:22:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 01:27:17.791 05:22:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:27:17.791 05:22:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:17.791 05:22:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:27:17.791 05:22:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:27:17.791 05:22:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:17.791 05:22:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:27:17.791 "name": "raid_bdev1", 01:27:17.791 "uuid": "091b37a6-ade7-452b-91f5-4c8f05f0b795", 01:27:17.791 "strip_size_kb": 0, 01:27:17.791 "state": "online", 01:27:17.791 "raid_level": "raid1", 01:27:17.791 "superblock": true, 01:27:17.791 "num_base_bdevs": 2, 01:27:17.791 "num_base_bdevs_discovered": 1, 01:27:17.791 "num_base_bdevs_operational": 1, 01:27:17.791 "base_bdevs_list": [ 01:27:17.791 { 01:27:17.791 "name": null, 01:27:17.791 "uuid": "00000000-0000-0000-0000-000000000000", 01:27:17.791 "is_configured": false, 01:27:17.791 "data_offset": 0, 01:27:17.791 "data_size": 63488 01:27:17.791 }, 01:27:17.791 { 01:27:17.791 "name": "BaseBdev2", 01:27:17.791 "uuid": "c3fa4213-441e-5251-b002-67625fc7611b", 01:27:17.791 "is_configured": true, 01:27:17.791 "data_offset": 2048, 01:27:17.791 "data_size": 63488 01:27:17.791 } 01:27:17.791 ] 01:27:17.791 }' 01:27:17.791 05:22:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:27:17.791 05:22:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set 
+x 01:27:18.357 05:22:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 01:27:18.357 05:22:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:18.357 05:22:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:27:18.357 [2024-12-09 05:22:09.681329] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 01:27:18.357 [2024-12-09 05:22:09.681657] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 01:27:18.357 [2024-12-09 05:22:09.681696] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 01:27:18.357 [2024-12-09 05:22:09.681774] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 01:27:18.357 [2024-12-09 05:22:09.698503] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1bb0 01:27:18.357 05:22:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:18.357 05:22:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 01:27:18.358 [2024-12-09 05:22:09.701122] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 01:27:19.343 05:22:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 01:27:19.343 05:22:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:27:19.343 05:22:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 01:27:19.343 05:22:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 01:27:19.343 05:22:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:27:19.343 05:22:10 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:27:19.343 05:22:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:27:19.343 05:22:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:19.343 05:22:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:27:19.343 05:22:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:19.343 05:22:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:27:19.343 "name": "raid_bdev1", 01:27:19.343 "uuid": "091b37a6-ade7-452b-91f5-4c8f05f0b795", 01:27:19.343 "strip_size_kb": 0, 01:27:19.343 "state": "online", 01:27:19.343 "raid_level": "raid1", 01:27:19.343 "superblock": true, 01:27:19.343 "num_base_bdevs": 2, 01:27:19.343 "num_base_bdevs_discovered": 2, 01:27:19.343 "num_base_bdevs_operational": 2, 01:27:19.343 "process": { 01:27:19.343 "type": "rebuild", 01:27:19.343 "target": "spare", 01:27:19.343 "progress": { 01:27:19.343 "blocks": 20480, 01:27:19.343 "percent": 32 01:27:19.343 } 01:27:19.343 }, 01:27:19.343 "base_bdevs_list": [ 01:27:19.343 { 01:27:19.343 "name": "spare", 01:27:19.343 "uuid": "d65c2bbc-ffe4-58ec-abcb-6b8d73aa71e6", 01:27:19.343 "is_configured": true, 01:27:19.343 "data_offset": 2048, 01:27:19.343 "data_size": 63488 01:27:19.343 }, 01:27:19.343 { 01:27:19.343 "name": "BaseBdev2", 01:27:19.343 "uuid": "c3fa4213-441e-5251-b002-67625fc7611b", 01:27:19.343 "is_configured": true, 01:27:19.343 "data_offset": 2048, 01:27:19.343 "data_size": 63488 01:27:19.343 } 01:27:19.343 ] 01:27:19.343 }' 01:27:19.343 05:22:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:27:19.343 05:22:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 01:27:19.343 05:22:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 01:27:19.343 05:22:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 01:27:19.343 05:22:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 01:27:19.343 05:22:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:19.343 05:22:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:27:19.343 [2024-12-09 05:22:10.858706] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 01:27:19.343 [2024-12-09 05:22:10.910664] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 01:27:19.343 [2024-12-09 05:22:10.910827] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:27:19.343 [2024-12-09 05:22:10.910853] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 01:27:19.343 [2024-12-09 05:22:10.910869] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 01:27:19.343 05:22:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:19.343 05:22:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 01:27:19.343 05:22:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:27:19.343 05:22:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:27:19.343 05:22:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:27:19.343 05:22:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:27:19.343 05:22:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 01:27:19.343 05:22:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:27:19.343 
05:22:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:27:19.343 05:22:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:27:19.343 05:22:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 01:27:19.343 05:22:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:27:19.343 05:22:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:19.343 05:22:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:27:19.343 05:22:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:27:19.602 05:22:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:19.602 05:22:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:27:19.602 "name": "raid_bdev1", 01:27:19.602 "uuid": "091b37a6-ade7-452b-91f5-4c8f05f0b795", 01:27:19.602 "strip_size_kb": 0, 01:27:19.602 "state": "online", 01:27:19.602 "raid_level": "raid1", 01:27:19.602 "superblock": true, 01:27:19.602 "num_base_bdevs": 2, 01:27:19.602 "num_base_bdevs_discovered": 1, 01:27:19.602 "num_base_bdevs_operational": 1, 01:27:19.602 "base_bdevs_list": [ 01:27:19.602 { 01:27:19.602 "name": null, 01:27:19.602 "uuid": "00000000-0000-0000-0000-000000000000", 01:27:19.602 "is_configured": false, 01:27:19.602 "data_offset": 0, 01:27:19.602 "data_size": 63488 01:27:19.602 }, 01:27:19.602 { 01:27:19.602 "name": "BaseBdev2", 01:27:19.602 "uuid": "c3fa4213-441e-5251-b002-67625fc7611b", 01:27:19.602 "is_configured": true, 01:27:19.602 "data_offset": 2048, 01:27:19.602 "data_size": 63488 01:27:19.602 } 01:27:19.602 ] 01:27:19.602 }' 01:27:19.602 05:22:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:27:19.602 05:22:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 01:27:19.860 05:22:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 01:27:19.860 05:22:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:19.860 05:22:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:27:19.860 [2024-12-09 05:22:11.438337] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 01:27:19.860 [2024-12-09 05:22:11.438470] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:27:19.860 [2024-12-09 05:22:11.438502] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 01:27:19.860 [2024-12-09 05:22:11.438520] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:27:19.860 [2024-12-09 05:22:11.439133] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:27:19.860 [2024-12-09 05:22:11.439165] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 01:27:19.860 [2024-12-09 05:22:11.439287] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 01:27:19.860 [2024-12-09 05:22:11.439312] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 01:27:19.860 [2024-12-09 05:22:11.439327] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
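At this point the trace re-creates the `spare` passthru bdev, the examine path spots the stale superblock (seq_number 4 vs 5) and re-adds it, and a rebuild starts; the test then does `sleep 1` before verifying the rebuild process. The bounded-retry idiom used elsewhere in this trace (`waitfornbd`/`waitfornbd_exit`: loop up to 20 times, `break` on success) can be factored into a small generic helper; this is a sketch in that style, not a function from the SPDK tree, and the condition string is a placeholder for a real check such as grepping `/proc/partitions` or re-querying the rebuild progress over RPC:

```shell
#!/usr/bin/env bash
# Generic bounded poll in the style of waitfornbd above: retry a condition
# up to 20 times with a short pause, succeed as soon as it holds.
wait_for() {
    local cond="$1" i
    for ((i = 1; i <= 20; i++)); do
        if eval "$cond"; then
            return 0    # condition held, same role as the 'break' in waitfornbd
        fi
        sleep 0.1
    done
    return 1            # gave up after 20 attempts (~2s)
}

# Trivial usage: a condition that holds immediately returns on the first try.
wait_for "true" && echo "condition met"
```

Compared with the fixed `sleep 1` before `verify_raid_bdev_process`, a poll like this bounds the wait without racing a slow rebuild start, at the cost of an `eval` on the condition string.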
01:27:19.860 [2024-12-09 05:22:11.439383] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 01:27:19.860 [2024-12-09 05:22:11.455217] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 01:27:19.860 spare 01:27:19.860 05:22:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:19.860 [2024-12-09 05:22:11.457935] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 01:27:19.860 05:22:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 01:27:21.233 05:22:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 01:27:21.233 05:22:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:27:21.233 05:22:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 01:27:21.233 05:22:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 01:27:21.233 05:22:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:27:21.233 05:22:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:27:21.233 05:22:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:27:21.233 05:22:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:21.233 05:22:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:27:21.233 05:22:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:21.233 05:22:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:27:21.233 "name": "raid_bdev1", 01:27:21.233 "uuid": "091b37a6-ade7-452b-91f5-4c8f05f0b795", 01:27:21.233 "strip_size_kb": 0, 01:27:21.233 "state": "online", 01:27:21.233 
"raid_level": "raid1", 01:27:21.233 "superblock": true, 01:27:21.233 "num_base_bdevs": 2, 01:27:21.233 "num_base_bdevs_discovered": 2, 01:27:21.233 "num_base_bdevs_operational": 2, 01:27:21.233 "process": { 01:27:21.233 "type": "rebuild", 01:27:21.233 "target": "spare", 01:27:21.233 "progress": { 01:27:21.233 "blocks": 20480, 01:27:21.233 "percent": 32 01:27:21.233 } 01:27:21.233 }, 01:27:21.233 "base_bdevs_list": [ 01:27:21.233 { 01:27:21.233 "name": "spare", 01:27:21.233 "uuid": "d65c2bbc-ffe4-58ec-abcb-6b8d73aa71e6", 01:27:21.233 "is_configured": true, 01:27:21.233 "data_offset": 2048, 01:27:21.233 "data_size": 63488 01:27:21.233 }, 01:27:21.233 { 01:27:21.233 "name": "BaseBdev2", 01:27:21.233 "uuid": "c3fa4213-441e-5251-b002-67625fc7611b", 01:27:21.233 "is_configured": true, 01:27:21.233 "data_offset": 2048, 01:27:21.233 "data_size": 63488 01:27:21.233 } 01:27:21.233 ] 01:27:21.233 }' 01:27:21.233 05:22:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:27:21.233 05:22:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 01:27:21.233 05:22:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:27:21.233 05:22:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 01:27:21.233 05:22:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 01:27:21.233 05:22:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:21.233 05:22:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:27:21.233 [2024-12-09 05:22:12.627685] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 01:27:21.234 [2024-12-09 05:22:12.667575] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 01:27:21.234 [2024-12-09 05:22:12.667671] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:27:21.234 [2024-12-09 05:22:12.667698] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 01:27:21.234 [2024-12-09 05:22:12.667710] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 01:27:21.234 05:22:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:21.234 05:22:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 01:27:21.234 05:22:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:27:21.234 05:22:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:27:21.234 05:22:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:27:21.234 05:22:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:27:21.234 05:22:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 01:27:21.234 05:22:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:27:21.234 05:22:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:27:21.234 05:22:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:27:21.234 05:22:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 01:27:21.234 05:22:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:27:21.234 05:22:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:27:21.234 05:22:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:21.234 05:22:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:27:21.234 05:22:12 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:21.234 05:22:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:27:21.234 "name": "raid_bdev1", 01:27:21.234 "uuid": "091b37a6-ade7-452b-91f5-4c8f05f0b795", 01:27:21.234 "strip_size_kb": 0, 01:27:21.234 "state": "online", 01:27:21.234 "raid_level": "raid1", 01:27:21.234 "superblock": true, 01:27:21.234 "num_base_bdevs": 2, 01:27:21.234 "num_base_bdevs_discovered": 1, 01:27:21.234 "num_base_bdevs_operational": 1, 01:27:21.234 "base_bdevs_list": [ 01:27:21.234 { 01:27:21.234 "name": null, 01:27:21.234 "uuid": "00000000-0000-0000-0000-000000000000", 01:27:21.234 "is_configured": false, 01:27:21.234 "data_offset": 0, 01:27:21.234 "data_size": 63488 01:27:21.234 }, 01:27:21.234 { 01:27:21.234 "name": "BaseBdev2", 01:27:21.234 "uuid": "c3fa4213-441e-5251-b002-67625fc7611b", 01:27:21.234 "is_configured": true, 01:27:21.234 "data_offset": 2048, 01:27:21.234 "data_size": 63488 01:27:21.234 } 01:27:21.234 ] 01:27:21.234 }' 01:27:21.234 05:22:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:27:21.234 05:22:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:27:21.801 05:22:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 01:27:21.801 05:22:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:27:21.801 05:22:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 01:27:21.802 05:22:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 01:27:21.802 05:22:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:27:21.802 05:22:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:27:21.802 05:22:13 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:27:21.802 05:22:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:21.802 05:22:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:27:21.802 05:22:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:21.802 05:22:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:27:21.802 "name": "raid_bdev1", 01:27:21.802 "uuid": "091b37a6-ade7-452b-91f5-4c8f05f0b795", 01:27:21.802 "strip_size_kb": 0, 01:27:21.802 "state": "online", 01:27:21.802 "raid_level": "raid1", 01:27:21.802 "superblock": true, 01:27:21.802 "num_base_bdevs": 2, 01:27:21.802 "num_base_bdevs_discovered": 1, 01:27:21.802 "num_base_bdevs_operational": 1, 01:27:21.802 "base_bdevs_list": [ 01:27:21.802 { 01:27:21.802 "name": null, 01:27:21.802 "uuid": "00000000-0000-0000-0000-000000000000", 01:27:21.802 "is_configured": false, 01:27:21.802 "data_offset": 0, 01:27:21.802 "data_size": 63488 01:27:21.802 }, 01:27:21.802 { 01:27:21.802 "name": "BaseBdev2", 01:27:21.802 "uuid": "c3fa4213-441e-5251-b002-67625fc7611b", 01:27:21.802 "is_configured": true, 01:27:21.802 "data_offset": 2048, 01:27:21.802 "data_size": 63488 01:27:21.802 } 01:27:21.802 ] 01:27:21.802 }' 01:27:21.802 05:22:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:27:21.802 05:22:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 01:27:21.802 05:22:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:27:21.802 05:22:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 01:27:21.802 05:22:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 01:27:21.802 05:22:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 01:27:21.802 05:22:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:27:21.802 05:22:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:21.802 05:22:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 01:27:21.802 05:22:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:21.802 05:22:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:27:21.802 [2024-12-09 05:22:13.380775] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 01:27:21.802 [2024-12-09 05:22:13.380895] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:27:21.802 [2024-12-09 05:22:13.380933] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 01:27:21.802 [2024-12-09 05:22:13.380960] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:27:21.802 [2024-12-09 05:22:13.381584] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:27:21.802 [2024-12-09 05:22:13.381611] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 01:27:21.802 [2024-12-09 05:22:13.381726] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 01:27:21.802 [2024-12-09 05:22:13.381747] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 01:27:21.802 [2024-12-09 05:22:13.381761] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 01:27:21.802 [2024-12-09 05:22:13.381774] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 01:27:21.802 BaseBdev1 01:27:21.802 05:22:13 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:21.802 05:22:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 01:27:23.176 05:22:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 01:27:23.176 05:22:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:27:23.176 05:22:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:27:23.176 05:22:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:27:23.176 05:22:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:27:23.176 05:22:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 01:27:23.177 05:22:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:27:23.177 05:22:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:27:23.177 05:22:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:27:23.177 05:22:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 01:27:23.177 05:22:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:27:23.177 05:22:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:27:23.177 05:22:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:23.177 05:22:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:27:23.177 05:22:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:23.177 05:22:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:27:23.177 "name": "raid_bdev1", 01:27:23.177 "uuid": "091b37a6-ade7-452b-91f5-4c8f05f0b795", 01:27:23.177 
"strip_size_kb": 0, 01:27:23.177 "state": "online", 01:27:23.177 "raid_level": "raid1", 01:27:23.177 "superblock": true, 01:27:23.177 "num_base_bdevs": 2, 01:27:23.177 "num_base_bdevs_discovered": 1, 01:27:23.177 "num_base_bdevs_operational": 1, 01:27:23.177 "base_bdevs_list": [ 01:27:23.177 { 01:27:23.177 "name": null, 01:27:23.177 "uuid": "00000000-0000-0000-0000-000000000000", 01:27:23.177 "is_configured": false, 01:27:23.177 "data_offset": 0, 01:27:23.177 "data_size": 63488 01:27:23.177 }, 01:27:23.177 { 01:27:23.177 "name": "BaseBdev2", 01:27:23.177 "uuid": "c3fa4213-441e-5251-b002-67625fc7611b", 01:27:23.177 "is_configured": true, 01:27:23.177 "data_offset": 2048, 01:27:23.177 "data_size": 63488 01:27:23.177 } 01:27:23.177 ] 01:27:23.177 }' 01:27:23.177 05:22:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:27:23.177 05:22:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:27:23.436 05:22:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 01:27:23.436 05:22:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:27:23.436 05:22:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 01:27:23.436 05:22:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 01:27:23.436 05:22:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:27:23.436 05:22:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:27:23.436 05:22:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:23.436 05:22:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:27:23.436 05:22:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:27:23.436 05:22:14 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:23.436 05:22:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:27:23.436 "name": "raid_bdev1", 01:27:23.436 "uuid": "091b37a6-ade7-452b-91f5-4c8f05f0b795", 01:27:23.436 "strip_size_kb": 0, 01:27:23.436 "state": "online", 01:27:23.436 "raid_level": "raid1", 01:27:23.436 "superblock": true, 01:27:23.436 "num_base_bdevs": 2, 01:27:23.436 "num_base_bdevs_discovered": 1, 01:27:23.436 "num_base_bdevs_operational": 1, 01:27:23.436 "base_bdevs_list": [ 01:27:23.436 { 01:27:23.436 "name": null, 01:27:23.436 "uuid": "00000000-0000-0000-0000-000000000000", 01:27:23.436 "is_configured": false, 01:27:23.436 "data_offset": 0, 01:27:23.436 "data_size": 63488 01:27:23.436 }, 01:27:23.436 { 01:27:23.436 "name": "BaseBdev2", 01:27:23.436 "uuid": "c3fa4213-441e-5251-b002-67625fc7611b", 01:27:23.436 "is_configured": true, 01:27:23.436 "data_offset": 2048, 01:27:23.436 "data_size": 63488 01:27:23.436 } 01:27:23.436 ] 01:27:23.436 }' 01:27:23.436 05:22:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:27:23.436 05:22:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 01:27:23.436 05:22:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:27:23.701 05:22:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 01:27:23.701 05:22:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 01:27:23.701 05:22:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 01:27:23.701 05:22:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 01:27:23.701 05:22:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local 
arg=rpc_cmd 01:27:23.701 05:22:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:27:23.701 05:22:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 01:27:23.701 05:22:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:27:23.701 05:22:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 01:27:23.701 05:22:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:23.701 05:22:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:27:23.701 [2024-12-09 05:22:15.089583] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 01:27:23.701 [2024-12-09 05:22:15.089811] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 01:27:23.701 [2024-12-09 05:22:15.089854] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 01:27:23.702 request: 01:27:23.702 { 01:27:23.702 "base_bdev": "BaseBdev1", 01:27:23.702 "raid_bdev": "raid_bdev1", 01:27:23.702 "method": "bdev_raid_add_base_bdev", 01:27:23.702 "req_id": 1 01:27:23.702 } 01:27:23.702 Got JSON-RPC error response 01:27:23.702 response: 01:27:23.702 { 01:27:23.702 "code": -22, 01:27:23.702 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 01:27:23.702 } 01:27:23.702 05:22:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 01:27:23.702 05:22:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 01:27:23.702 05:22:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:27:23.702 05:22:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:27:23.702 05:22:15 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:27:23.702 05:22:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 01:27:24.638 05:22:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 01:27:24.638 05:22:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:27:24.638 05:22:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:27:24.638 05:22:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:27:24.638 05:22:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:27:24.638 05:22:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 01:27:24.638 05:22:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:27:24.638 05:22:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:27:24.638 05:22:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:27:24.639 05:22:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 01:27:24.639 05:22:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:27:24.639 05:22:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:27:24.639 05:22:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:24.639 05:22:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:27:24.639 05:22:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:24.639 05:22:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:27:24.639 "name": "raid_bdev1", 01:27:24.639 "uuid": 
"091b37a6-ade7-452b-91f5-4c8f05f0b795", 01:27:24.639 "strip_size_kb": 0, 01:27:24.639 "state": "online", 01:27:24.639 "raid_level": "raid1", 01:27:24.639 "superblock": true, 01:27:24.639 "num_base_bdevs": 2, 01:27:24.639 "num_base_bdevs_discovered": 1, 01:27:24.639 "num_base_bdevs_operational": 1, 01:27:24.639 "base_bdevs_list": [ 01:27:24.639 { 01:27:24.639 "name": null, 01:27:24.639 "uuid": "00000000-0000-0000-0000-000000000000", 01:27:24.639 "is_configured": false, 01:27:24.639 "data_offset": 0, 01:27:24.639 "data_size": 63488 01:27:24.639 }, 01:27:24.639 { 01:27:24.639 "name": "BaseBdev2", 01:27:24.639 "uuid": "c3fa4213-441e-5251-b002-67625fc7611b", 01:27:24.639 "is_configured": true, 01:27:24.639 "data_offset": 2048, 01:27:24.639 "data_size": 63488 01:27:24.639 } 01:27:24.639 ] 01:27:24.639 }' 01:27:24.639 05:22:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:27:24.639 05:22:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:27:25.216 05:22:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 01:27:25.216 05:22:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:27:25.216 05:22:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 01:27:25.216 05:22:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 01:27:25.216 05:22:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:27:25.216 05:22:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:27:25.216 05:22:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:25.216 05:22:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:27:25.216 05:22:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # 
set +x 01:27:25.216 05:22:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:25.216 05:22:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:27:25.216 "name": "raid_bdev1", 01:27:25.216 "uuid": "091b37a6-ade7-452b-91f5-4c8f05f0b795", 01:27:25.216 "strip_size_kb": 0, 01:27:25.216 "state": "online", 01:27:25.216 "raid_level": "raid1", 01:27:25.216 "superblock": true, 01:27:25.216 "num_base_bdevs": 2, 01:27:25.216 "num_base_bdevs_discovered": 1, 01:27:25.216 "num_base_bdevs_operational": 1, 01:27:25.216 "base_bdevs_list": [ 01:27:25.216 { 01:27:25.216 "name": null, 01:27:25.216 "uuid": "00000000-0000-0000-0000-000000000000", 01:27:25.216 "is_configured": false, 01:27:25.216 "data_offset": 0, 01:27:25.216 "data_size": 63488 01:27:25.216 }, 01:27:25.216 { 01:27:25.216 "name": "BaseBdev2", 01:27:25.216 "uuid": "c3fa4213-441e-5251-b002-67625fc7611b", 01:27:25.216 "is_configured": true, 01:27:25.216 "data_offset": 2048, 01:27:25.216 "data_size": 63488 01:27:25.216 } 01:27:25.216 ] 01:27:25.216 }' 01:27:25.216 05:22:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:27:25.216 05:22:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 01:27:25.216 05:22:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:27:25.216 05:22:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 01:27:25.216 05:22:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 75859 01:27:25.216 05:22:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 75859 ']' 01:27:25.216 05:22:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 75859 01:27:25.216 05:22:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 01:27:25.216 05:22:16 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:27:25.216 05:22:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75859 01:27:25.473 killing process with pid 75859 01:27:25.474 Received shutdown signal, test time was about 60.000000 seconds 01:27:25.474 01:27:25.474 Latency(us) 01:27:25.474 [2024-12-09T05:22:17.091Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:27:25.474 [2024-12-09T05:22:17.091Z] =================================================================================================================== 01:27:25.474 [2024-12-09T05:22:17.091Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 01:27:25.474 05:22:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:27:25.474 05:22:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:27:25.474 05:22:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75859' 01:27:25.474 05:22:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 75859 01:27:25.474 [2024-12-09 05:22:16.832319] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 01:27:25.474 [2024-12-09 05:22:16.832561] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 01:27:25.474 [2024-12-09 05:22:16.832655] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 01:27:25.474 [2024-12-09 05:22:16.832677] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 01:27:25.474 05:22:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 75859 01:27:25.731 [2024-12-09 05:22:17.119477] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 01:27:27.116 ************************************ 01:27:27.116 END TEST raid_rebuild_test_sb 
01:27:27.116 05:22:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 01:27:27.116 01:27:27.116 real 0m26.769s 01:27:27.116 user 0m32.836s 01:27:27.116 sys 0m4.078s 01:27:27.116 05:22:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 01:27:27.116 05:22:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:27:27.116 ************************************ 01:27:27.116 05:22:18 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 01:27:27.116 05:22:18 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 01:27:27.116 05:22:18 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 01:27:27.116 05:22:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 01:27:27.116 ************************************ 01:27:27.116 START TEST raid_rebuild_test_io 01:27:27.116 ************************************ 01:27:27.116 05:22:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false true true 01:27:27.116 05:22:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 01:27:27.116 05:22:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 01:27:27.116 05:22:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 01:27:27.116 05:22:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 01:27:27.116 05:22:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 01:27:27.116 05:22:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 01:27:27.116 05:22:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 01:27:27.116 05:22:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 01:27:27.116 05:22:18 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@574 -- # (( i++ )) 01:27:27.116 05:22:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 01:27:27.116 05:22:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 01:27:27.117 05:22:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 01:27:27.117 05:22:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 01:27:27.117 05:22:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 01:27:27.117 05:22:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 01:27:27.117 05:22:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 01:27:27.117 05:22:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 01:27:27.117 05:22:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 01:27:27.117 05:22:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 01:27:27.117 05:22:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 01:27:27.117 05:22:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 01:27:27.117 05:22:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 01:27:27.117 05:22:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 01:27:27.117 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
01:27:27.117 05:22:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=76621 01:27:27.117 05:22:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 76621 01:27:27.117 05:22:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 01:27:27.117 05:22:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 76621 ']' 01:27:27.117 05:22:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:27:27.117 05:22:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # local max_retries=100 01:27:27.117 05:22:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:27:27.117 05:22:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 01:27:27.117 05:22:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 01:27:27.117 [2024-12-09 05:22:18.588408] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:27:27.117 [2024-12-09 05:22:18.588878] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76621 ] 01:27:27.117 I/O size of 3145728 is greater than zero copy threshold (65536). 01:27:27.117 Zero copy mechanism will not be used. 
01:27:27.398 [2024-12-09 05:22:18.760771] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:27:27.398 [2024-12-09 05:22:18.881395] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:27:27.656 [2024-12-09 05:22:19.112037] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 01:27:27.656 [2024-12-09 05:22:19.112102] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 01:27:27.915 05:22:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:27:27.915 05:22:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 01:27:27.915 05:22:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 01:27:27.915 05:22:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 01:27:27.915 05:22:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:27.915 05:22:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 01:27:28.173 BaseBdev1_malloc 01:27:28.173 05:22:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:28.173 05:22:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 01:27:28.173 05:22:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:28.173 05:22:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 01:27:28.173 [2024-12-09 05:22:19.567060] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 01:27:28.173 [2024-12-09 05:22:19.567557] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:27:28.173 [2024-12-09 05:22:19.567612] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 01:27:28.173 [2024-12-09 
05:22:19.567636] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:27:28.173 [2024-12-09 05:22:19.571141] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:27:28.173 [2024-12-09 05:22:19.571188] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 01:27:28.173 BaseBdev1 01:27:28.173 05:22:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:28.173 05:22:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 01:27:28.173 05:22:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 01:27:28.173 05:22:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:28.173 05:22:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 01:27:28.173 BaseBdev2_malloc 01:27:28.173 05:22:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:28.173 05:22:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 01:27:28.173 05:22:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:28.173 05:22:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 01:27:28.173 [2024-12-09 05:22:19.624573] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 01:27:28.173 [2024-12-09 05:22:19.624663] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:27:28.173 [2024-12-09 05:22:19.624698] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 01:27:28.173 [2024-12-09 05:22:19.624732] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:27:28.173 [2024-12-09 05:22:19.627872] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 01:27:28.173 [2024-12-09 05:22:19.627915] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 01:27:28.173 BaseBdev2 01:27:28.174 05:22:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:28.174 05:22:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 01:27:28.174 05:22:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:28.174 05:22:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 01:27:28.174 spare_malloc 01:27:28.174 05:22:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:28.174 05:22:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 01:27:28.174 05:22:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:28.174 05:22:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 01:27:28.174 spare_delay 01:27:28.174 05:22:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:28.174 05:22:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 01:27:28.174 05:22:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:28.174 05:22:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 01:27:28.174 [2024-12-09 05:22:19.700201] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 01:27:28.174 [2024-12-09 05:22:19.700288] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:27:28.174 [2024-12-09 05:22:19.700316] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 01:27:28.174 [2024-12-09 05:22:19.700333] 
vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:27:28.174 [2024-12-09 05:22:19.703622] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:27:28.174 [2024-12-09 05:22:19.703676] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 01:27:28.174 spare 01:27:28.174 05:22:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:28.174 05:22:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 01:27:28.174 05:22:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:28.174 05:22:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 01:27:28.174 [2024-12-09 05:22:19.708382] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 01:27:28.174 [2024-12-09 05:22:19.711429] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 01:27:28.174 [2024-12-09 05:22:19.711604] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 01:27:28.174 [2024-12-09 05:22:19.711628] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 01:27:28.174 [2024-12-09 05:22:19.711979] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 01:27:28.174 [2024-12-09 05:22:19.712232] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 01:27:28.174 [2024-12-09 05:22:19.712250] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 01:27:28.174 [2024-12-09 05:22:19.712531] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:27:28.174 05:22:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:28.174 05:22:19 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 01:27:28.174 05:22:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:27:28.174 05:22:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:27:28.174 05:22:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:27:28.174 05:22:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:27:28.174 05:22:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 01:27:28.174 05:22:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:27:28.174 05:22:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:27:28.174 05:22:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:27:28.174 05:22:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 01:27:28.174 05:22:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:27:28.174 05:22:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:27:28.174 05:22:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:28.174 05:22:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 01:27:28.174 05:22:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:28.174 05:22:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:27:28.174 "name": "raid_bdev1", 01:27:28.174 "uuid": "274caafa-3f63-44e6-b557-7ad62006fc41", 01:27:28.174 "strip_size_kb": 0, 01:27:28.174 "state": "online", 01:27:28.174 "raid_level": "raid1", 01:27:28.174 "superblock": false, 01:27:28.174 "num_base_bdevs": 2, 01:27:28.174 
"num_base_bdevs_discovered": 2, 01:27:28.174 "num_base_bdevs_operational": 2, 01:27:28.174 "base_bdevs_list": [ 01:27:28.174 { 01:27:28.174 "name": "BaseBdev1", 01:27:28.174 "uuid": "5983d566-4527-5808-b1ca-6269b338d79c", 01:27:28.174 "is_configured": true, 01:27:28.174 "data_offset": 0, 01:27:28.174 "data_size": 65536 01:27:28.174 }, 01:27:28.174 { 01:27:28.174 "name": "BaseBdev2", 01:27:28.174 "uuid": "9a45eb78-501a-5925-ae54-cf207a14e4dc", 01:27:28.174 "is_configured": true, 01:27:28.174 "data_offset": 0, 01:27:28.174 "data_size": 65536 01:27:28.174 } 01:27:28.174 ] 01:27:28.174 }' 01:27:28.174 05:22:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:27:28.174 05:22:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 01:27:28.740 05:22:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 01:27:28.740 05:22:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 01:27:28.740 05:22:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:28.740 05:22:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 01:27:28.740 [2024-12-09 05:22:20.229210] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 01:27:28.740 05:22:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:28.740 05:22:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 01:27:28.740 05:22:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 01:27:28.740 05:22:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:28.740 05:22:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 01:27:28.740 05:22:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 
01:27:28.740 05:22:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:28.740 05:22:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 01:27:28.740 05:22:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 01:27:28.740 05:22:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 01:27:28.740 05:22:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 01:27:28.740 05:22:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:28.740 05:22:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 01:27:28.740 [2024-12-09 05:22:20.332824] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 01:27:28.740 05:22:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:28.740 05:22:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 01:27:28.740 05:22:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:27:28.741 05:22:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:27:28.741 05:22:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:27:28.741 05:22:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:27:28.741 05:22:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 01:27:28.741 05:22:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:27:28.741 05:22:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:27:28.741 05:22:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 01:27:28.741 05:22:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 01:27:28.741 05:22:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:27:28.741 05:22:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:28.741 05:22:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 01:27:28.741 05:22:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:27:28.741 05:22:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:28.998 05:22:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:27:28.998 "name": "raid_bdev1", 01:27:28.998 "uuid": "274caafa-3f63-44e6-b557-7ad62006fc41", 01:27:28.998 "strip_size_kb": 0, 01:27:28.998 "state": "online", 01:27:28.998 "raid_level": "raid1", 01:27:28.998 "superblock": false, 01:27:28.998 "num_base_bdevs": 2, 01:27:28.998 "num_base_bdevs_discovered": 1, 01:27:28.998 "num_base_bdevs_operational": 1, 01:27:28.998 "base_bdevs_list": [ 01:27:28.998 { 01:27:28.998 "name": null, 01:27:28.998 "uuid": "00000000-0000-0000-0000-000000000000", 01:27:28.998 "is_configured": false, 01:27:28.998 "data_offset": 0, 01:27:28.998 "data_size": 65536 01:27:28.998 }, 01:27:28.998 { 01:27:28.998 "name": "BaseBdev2", 01:27:28.998 "uuid": "9a45eb78-501a-5925-ae54-cf207a14e4dc", 01:27:28.998 "is_configured": true, 01:27:28.998 "data_offset": 0, 01:27:28.998 "data_size": 65536 01:27:28.998 } 01:27:28.998 ] 01:27:28.998 }' 01:27:28.998 05:22:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:27:28.998 05:22:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 01:27:28.998 [2024-12-09 05:22:20.470439] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 01:27:28.998 I/O size of 3145728 is greater 
than zero copy threshold (65536). 01:27:28.998 Zero copy mechanism will not be used. 01:27:28.998 Running I/O for 60 seconds... 01:27:29.256 05:22:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 01:27:29.256 05:22:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:29.256 05:22:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 01:27:29.515 [2024-12-09 05:22:20.876885] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 01:27:29.515 05:22:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:29.515 05:22:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 01:27:29.515 [2024-12-09 05:22:20.962130] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 01:27:29.515 [2024-12-09 05:22:20.965200] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 01:27:29.515 [2024-12-09 05:22:21.068218] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 01:27:29.515 [2024-12-09 05:22:21.070480] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 01:27:29.774 [2024-12-09 05:22:21.320468] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 01:27:29.774 [2024-12-09 05:22:21.322224] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 01:27:30.290 165.00 IOPS, 495.00 MiB/s [2024-12-09T05:22:21.907Z] [2024-12-09 05:22:21.692120] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 01:27:30.290 [2024-12-09 05:22:21.807451] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: 
process_offset: 10240 offset_begin: 6144 offset_end: 12288 01:27:30.548 05:22:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 01:27:30.548 05:22:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:27:30.548 05:22:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 01:27:30.548 05:22:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 01:27:30.548 05:22:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:27:30.548 05:22:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:27:30.548 05:22:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:27:30.548 05:22:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:30.548 05:22:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 01:27:30.548 05:22:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:30.548 05:22:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:27:30.548 "name": "raid_bdev1", 01:27:30.548 "uuid": "274caafa-3f63-44e6-b557-7ad62006fc41", 01:27:30.548 "strip_size_kb": 0, 01:27:30.548 "state": "online", 01:27:30.548 "raid_level": "raid1", 01:27:30.548 "superblock": false, 01:27:30.548 "num_base_bdevs": 2, 01:27:30.548 "num_base_bdevs_discovered": 2, 01:27:30.548 "num_base_bdevs_operational": 2, 01:27:30.548 "process": { 01:27:30.548 "type": "rebuild", 01:27:30.548 "target": "spare", 01:27:30.548 "progress": { 01:27:30.548 "blocks": 12288, 01:27:30.548 "percent": 18 01:27:30.548 } 01:27:30.548 }, 01:27:30.548 "base_bdevs_list": [ 01:27:30.548 { 01:27:30.548 "name": "spare", 01:27:30.548 "uuid": "9d2908ab-d058-5261-af79-29008ccd9976", 01:27:30.548 
"is_configured": true, 01:27:30.548 "data_offset": 0, 01:27:30.548 "data_size": 65536 01:27:30.548 }, 01:27:30.548 { 01:27:30.548 "name": "BaseBdev2", 01:27:30.548 "uuid": "9a45eb78-501a-5925-ae54-cf207a14e4dc", 01:27:30.548 "is_configured": true, 01:27:30.548 "data_offset": 0, 01:27:30.548 "data_size": 65536 01:27:30.548 } 01:27:30.548 ] 01:27:30.548 }' 01:27:30.548 05:22:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:27:30.548 05:22:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 01:27:30.548 05:22:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:27:30.548 05:22:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 01:27:30.548 05:22:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 01:27:30.548 05:22:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:30.548 05:22:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 01:27:30.548 [2024-12-09 05:22:22.102659] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 01:27:30.548 [2024-12-09 05:22:22.159020] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 01:27:30.806 [2024-12-09 05:22:22.269625] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 01:27:30.806 [2024-12-09 05:22:22.286161] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:27:30.806 [2024-12-09 05:22:22.286646] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 01:27:30.806 [2024-12-09 05:22:22.286686] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 01:27:30.806 [2024-12-09 05:22:22.338577] 
bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 01:27:30.806 05:22:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:30.806 05:22:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 01:27:30.806 05:22:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:27:30.806 05:22:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:27:30.806 05:22:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:27:30.806 05:22:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:27:30.806 05:22:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 01:27:30.806 05:22:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:27:30.806 05:22:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:27:30.806 05:22:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:27:30.806 05:22:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 01:27:30.806 05:22:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:27:30.806 05:22:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:27:30.806 05:22:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:30.806 05:22:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 01:27:30.806 05:22:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:30.806 05:22:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:27:30.806 "name": "raid_bdev1", 01:27:30.806 
"uuid": "274caafa-3f63-44e6-b557-7ad62006fc41", 01:27:30.806 "strip_size_kb": 0, 01:27:30.806 "state": "online", 01:27:30.806 "raid_level": "raid1", 01:27:30.806 "superblock": false, 01:27:30.806 "num_base_bdevs": 2, 01:27:30.806 "num_base_bdevs_discovered": 1, 01:27:30.806 "num_base_bdevs_operational": 1, 01:27:30.806 "base_bdevs_list": [ 01:27:30.806 { 01:27:30.806 "name": null, 01:27:30.806 "uuid": "00000000-0000-0000-0000-000000000000", 01:27:30.806 "is_configured": false, 01:27:30.806 "data_offset": 0, 01:27:30.806 "data_size": 65536 01:27:30.806 }, 01:27:30.806 { 01:27:30.806 "name": "BaseBdev2", 01:27:30.806 "uuid": "9a45eb78-501a-5925-ae54-cf207a14e4dc", 01:27:30.806 "is_configured": true, 01:27:30.806 "data_offset": 0, 01:27:30.806 "data_size": 65536 01:27:30.806 } 01:27:30.806 ] 01:27:30.806 }' 01:27:30.806 05:22:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:27:30.806 05:22:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 01:27:31.325 115.50 IOPS, 346.50 MiB/s [2024-12-09T05:22:22.942Z] 05:22:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 01:27:31.325 05:22:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:27:31.325 05:22:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 01:27:31.325 05:22:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 01:27:31.325 05:22:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:27:31.325 05:22:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:27:31.325 05:22:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:31.325 05:22:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:27:31.325 05:22:22 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 01:27:31.325 05:22:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:31.584 05:22:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:27:31.584 "name": "raid_bdev1", 01:27:31.584 "uuid": "274caafa-3f63-44e6-b557-7ad62006fc41", 01:27:31.584 "strip_size_kb": 0, 01:27:31.584 "state": "online", 01:27:31.584 "raid_level": "raid1", 01:27:31.584 "superblock": false, 01:27:31.584 "num_base_bdevs": 2, 01:27:31.584 "num_base_bdevs_discovered": 1, 01:27:31.584 "num_base_bdevs_operational": 1, 01:27:31.584 "base_bdevs_list": [ 01:27:31.584 { 01:27:31.584 "name": null, 01:27:31.584 "uuid": "00000000-0000-0000-0000-000000000000", 01:27:31.584 "is_configured": false, 01:27:31.584 "data_offset": 0, 01:27:31.584 "data_size": 65536 01:27:31.584 }, 01:27:31.584 { 01:27:31.584 "name": "BaseBdev2", 01:27:31.584 "uuid": "9a45eb78-501a-5925-ae54-cf207a14e4dc", 01:27:31.584 "is_configured": true, 01:27:31.584 "data_offset": 0, 01:27:31.584 "data_size": 65536 01:27:31.584 } 01:27:31.584 ] 01:27:31.584 }' 01:27:31.584 05:22:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:27:31.584 05:22:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 01:27:31.584 05:22:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:27:31.584 05:22:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 01:27:31.584 05:22:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 01:27:31.584 05:22:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:31.584 05:22:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 01:27:31.584 [2024-12-09 05:22:23.066758] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 01:27:31.584 05:22:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:31.584 05:22:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 01:27:31.584 [2024-12-09 05:22:23.143939] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 01:27:31.584 [2024-12-09 05:22:23.147156] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 01:27:31.842 [2024-12-09 05:22:23.272656] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 01:27:31.842 [2024-12-09 05:22:23.274313] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 01:27:31.842 [2024-12-09 05:22:23.439996] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 01:27:32.100 145.33 IOPS, 436.00 MiB/s [2024-12-09T05:22:23.717Z] [2024-12-09 05:22:23.693164] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 01:27:32.359 [2024-12-09 05:22:23.906638] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 01:27:32.359 [2024-12-09 05:22:23.907726] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 01:27:32.617 05:22:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 01:27:32.617 05:22:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:27:32.617 05:22:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 01:27:32.617 05:22:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # 
local target=spare 01:27:32.617 05:22:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:27:32.617 05:22:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:27:32.617 05:22:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:32.617 05:22:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 01:27:32.617 05:22:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:27:32.617 05:22:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:32.617 05:22:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:27:32.617 "name": "raid_bdev1", 01:27:32.617 "uuid": "274caafa-3f63-44e6-b557-7ad62006fc41", 01:27:32.617 "strip_size_kb": 0, 01:27:32.617 "state": "online", 01:27:32.617 "raid_level": "raid1", 01:27:32.617 "superblock": false, 01:27:32.617 "num_base_bdevs": 2, 01:27:32.617 "num_base_bdevs_discovered": 2, 01:27:32.617 "num_base_bdevs_operational": 2, 01:27:32.617 "process": { 01:27:32.617 "type": "rebuild", 01:27:32.617 "target": "spare", 01:27:32.617 "progress": { 01:27:32.617 "blocks": 10240, 01:27:32.617 "percent": 15 01:27:32.617 } 01:27:32.617 }, 01:27:32.617 "base_bdevs_list": [ 01:27:32.617 { 01:27:32.617 "name": "spare", 01:27:32.617 "uuid": "9d2908ab-d058-5261-af79-29008ccd9976", 01:27:32.617 "is_configured": true, 01:27:32.617 "data_offset": 0, 01:27:32.617 "data_size": 65536 01:27:32.617 }, 01:27:32.617 { 01:27:32.617 "name": "BaseBdev2", 01:27:32.617 "uuid": "9a45eb78-501a-5925-ae54-cf207a14e4dc", 01:27:32.618 "is_configured": true, 01:27:32.618 "data_offset": 0, 01:27:32.618 "data_size": 65536 01:27:32.618 } 01:27:32.618 ] 01:27:32.618 }' 01:27:32.618 05:22:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:27:32.884 05:22:24 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 01:27:32.884 05:22:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:27:32.884 05:22:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 01:27:32.884 05:22:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 01:27:32.884 05:22:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 01:27:32.884 05:22:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 01:27:32.884 05:22:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 01:27:32.884 05:22:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=446 01:27:32.884 05:22:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 01:27:32.884 05:22:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 01:27:32.884 05:22:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:27:32.884 05:22:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 01:27:32.885 05:22:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 01:27:32.885 05:22:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:27:32.885 05:22:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:27:32.885 05:22:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:27:32.885 05:22:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:32.885 05:22:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 01:27:32.885 05:22:24 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:32.885 05:22:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:27:32.885 "name": "raid_bdev1", 01:27:32.885 "uuid": "274caafa-3f63-44e6-b557-7ad62006fc41", 01:27:32.885 "strip_size_kb": 0, 01:27:32.885 "state": "online", 01:27:32.885 "raid_level": "raid1", 01:27:32.885 "superblock": false, 01:27:32.885 "num_base_bdevs": 2, 01:27:32.885 "num_base_bdevs_discovered": 2, 01:27:32.885 "num_base_bdevs_operational": 2, 01:27:32.885 "process": { 01:27:32.885 "type": "rebuild", 01:27:32.885 "target": "spare", 01:27:32.885 "progress": { 01:27:32.885 "blocks": 14336, 01:27:32.885 "percent": 21 01:27:32.885 } 01:27:32.885 }, 01:27:32.885 "base_bdevs_list": [ 01:27:32.885 { 01:27:32.885 "name": "spare", 01:27:32.885 "uuid": "9d2908ab-d058-5261-af79-29008ccd9976", 01:27:32.885 "is_configured": true, 01:27:32.885 "data_offset": 0, 01:27:32.885 "data_size": 65536 01:27:32.885 }, 01:27:32.885 { 01:27:32.885 "name": "BaseBdev2", 01:27:32.885 "uuid": "9a45eb78-501a-5925-ae54-cf207a14e4dc", 01:27:32.885 "is_configured": true, 01:27:32.885 "data_offset": 0, 01:27:32.885 "data_size": 65536 01:27:32.885 } 01:27:32.885 ] 01:27:32.885 }' 01:27:32.885 05:22:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:27:32.885 [2024-12-09 05:22:24.366266] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 01:27:32.885 05:22:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 01:27:32.885 05:22:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:27:32.885 05:22:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 01:27:32.885 05:22:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 01:27:33.159 132.50 IOPS, 397.50 
MiB/s [2024-12-09T05:22:24.777Z] [2024-12-09 05:22:24.620898] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 01:27:33.160 [2024-12-09 05:22:24.741053] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 01:27:33.724 [2024-12-09 05:22:25.065394] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 01:27:33.981 05:22:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 01:27:33.981 05:22:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 01:27:33.981 05:22:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:27:33.981 05:22:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 01:27:33.981 05:22:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 01:27:33.981 05:22:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:27:33.981 05:22:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:27:33.982 05:22:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:33.982 05:22:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:27:33.982 05:22:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 01:27:33.982 05:22:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:33.982 [2024-12-09 05:22:25.495639] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 01:27:33.982 [2024-12-09 05:22:25.496015] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: 
process_offset: 34816 offset_begin: 30720 offset_end: 36864 01:27:33.982 119.00 IOPS, 357.00 MiB/s [2024-12-09T05:22:25.599Z] 05:22:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:27:33.982 "name": "raid_bdev1", 01:27:33.982 "uuid": "274caafa-3f63-44e6-b557-7ad62006fc41", 01:27:33.982 "strip_size_kb": 0, 01:27:33.982 "state": "online", 01:27:33.982 "raid_level": "raid1", 01:27:33.982 "superblock": false, 01:27:33.982 "num_base_bdevs": 2, 01:27:33.982 "num_base_bdevs_discovered": 2, 01:27:33.982 "num_base_bdevs_operational": 2, 01:27:33.982 "process": { 01:27:33.982 "type": "rebuild", 01:27:33.982 "target": "spare", 01:27:33.982 "progress": { 01:27:33.982 "blocks": 32768, 01:27:33.982 "percent": 50 01:27:33.982 } 01:27:33.982 }, 01:27:33.982 "base_bdevs_list": [ 01:27:33.982 { 01:27:33.982 "name": "spare", 01:27:33.982 "uuid": "9d2908ab-d058-5261-af79-29008ccd9976", 01:27:33.982 "is_configured": true, 01:27:33.982 "data_offset": 0, 01:27:33.982 "data_size": 65536 01:27:33.982 }, 01:27:33.982 { 01:27:33.982 "name": "BaseBdev2", 01:27:33.982 "uuid": "9a45eb78-501a-5925-ae54-cf207a14e4dc", 01:27:33.982 "is_configured": true, 01:27:33.982 "data_offset": 0, 01:27:33.982 "data_size": 65536 01:27:33.982 } 01:27:33.982 ] 01:27:33.982 }' 01:27:33.982 05:22:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:27:33.982 05:22:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 01:27:33.982 05:22:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:27:34.240 05:22:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 01:27:34.240 05:22:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 01:27:35.176 105.67 IOPS, 317.00 MiB/s [2024-12-09T05:22:26.793Z] 05:22:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 
01:27:35.176 05:22:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 01:27:35.176 05:22:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:27:35.176 05:22:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 01:27:35.176 05:22:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 01:27:35.176 05:22:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:27:35.176 05:22:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:27:35.176 05:22:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:35.176 05:22:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:27:35.176 05:22:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 01:27:35.176 05:22:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:35.176 05:22:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:27:35.176 "name": "raid_bdev1", 01:27:35.176 "uuid": "274caafa-3f63-44e6-b557-7ad62006fc41", 01:27:35.176 "strip_size_kb": 0, 01:27:35.176 "state": "online", 01:27:35.176 "raid_level": "raid1", 01:27:35.176 "superblock": false, 01:27:35.176 "num_base_bdevs": 2, 01:27:35.176 "num_base_bdevs_discovered": 2, 01:27:35.176 "num_base_bdevs_operational": 2, 01:27:35.176 "process": { 01:27:35.176 "type": "rebuild", 01:27:35.176 "target": "spare", 01:27:35.176 "progress": { 01:27:35.176 "blocks": 51200, 01:27:35.176 "percent": 78 01:27:35.176 } 01:27:35.176 }, 01:27:35.176 "base_bdevs_list": [ 01:27:35.176 { 01:27:35.176 "name": "spare", 01:27:35.176 "uuid": "9d2908ab-d058-5261-af79-29008ccd9976", 01:27:35.176 "is_configured": true, 01:27:35.176 "data_offset": 0, 01:27:35.176 
"data_size": 65536 01:27:35.176 }, 01:27:35.176 { 01:27:35.176 "name": "BaseBdev2", 01:27:35.176 "uuid": "9a45eb78-501a-5925-ae54-cf207a14e4dc", 01:27:35.176 "is_configured": true, 01:27:35.176 "data_offset": 0, 01:27:35.176 "data_size": 65536 01:27:35.176 } 01:27:35.176 ] 01:27:35.176 }' 01:27:35.176 05:22:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:27:35.176 05:22:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 01:27:35.176 05:22:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:27:35.176 05:22:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 01:27:35.176 05:22:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 01:27:35.743 [2024-12-09 05:22:27.286850] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 01:27:36.000 [2024-12-09 05:22:27.393079] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 01:27:36.000 [2024-12-09 05:22:27.396305] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:27:36.258 94.71 IOPS, 284.14 MiB/s [2024-12-09T05:22:27.875Z] 05:22:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 01:27:36.258 05:22:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 01:27:36.258 05:22:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:27:36.258 05:22:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 01:27:36.258 05:22:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 01:27:36.258 05:22:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:27:36.258 05:22:27 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:27:36.258 05:22:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:27:36.258 05:22:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:36.258 05:22:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 01:27:36.258 05:22:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:36.258 05:22:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:27:36.258 "name": "raid_bdev1", 01:27:36.258 "uuid": "274caafa-3f63-44e6-b557-7ad62006fc41", 01:27:36.258 "strip_size_kb": 0, 01:27:36.258 "state": "online", 01:27:36.258 "raid_level": "raid1", 01:27:36.258 "superblock": false, 01:27:36.258 "num_base_bdevs": 2, 01:27:36.258 "num_base_bdevs_discovered": 2, 01:27:36.258 "num_base_bdevs_operational": 2, 01:27:36.258 "base_bdevs_list": [ 01:27:36.258 { 01:27:36.258 "name": "spare", 01:27:36.258 "uuid": "9d2908ab-d058-5261-af79-29008ccd9976", 01:27:36.258 "is_configured": true, 01:27:36.258 "data_offset": 0, 01:27:36.258 "data_size": 65536 01:27:36.258 }, 01:27:36.258 { 01:27:36.258 "name": "BaseBdev2", 01:27:36.258 "uuid": "9a45eb78-501a-5925-ae54-cf207a14e4dc", 01:27:36.258 "is_configured": true, 01:27:36.258 "data_offset": 0, 01:27:36.258 "data_size": 65536 01:27:36.258 } 01:27:36.258 ] 01:27:36.258 }' 01:27:36.258 05:22:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:27:36.516 05:22:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 01:27:36.516 05:22:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:27:36.516 05:22:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 01:27:36.516 05:22:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # 
break 01:27:36.517 05:22:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 01:27:36.517 05:22:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:27:36.517 05:22:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 01:27:36.517 05:22:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 01:27:36.517 05:22:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:27:36.517 05:22:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:27:36.517 05:22:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:27:36.517 05:22:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:36.517 05:22:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 01:27:36.517 05:22:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:36.517 05:22:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:27:36.517 "name": "raid_bdev1", 01:27:36.517 "uuid": "274caafa-3f63-44e6-b557-7ad62006fc41", 01:27:36.517 "strip_size_kb": 0, 01:27:36.517 "state": "online", 01:27:36.517 "raid_level": "raid1", 01:27:36.517 "superblock": false, 01:27:36.517 "num_base_bdevs": 2, 01:27:36.517 "num_base_bdevs_discovered": 2, 01:27:36.517 "num_base_bdevs_operational": 2, 01:27:36.517 "base_bdevs_list": [ 01:27:36.517 { 01:27:36.517 "name": "spare", 01:27:36.517 "uuid": "9d2908ab-d058-5261-af79-29008ccd9976", 01:27:36.517 "is_configured": true, 01:27:36.517 "data_offset": 0, 01:27:36.517 "data_size": 65536 01:27:36.517 }, 01:27:36.517 { 01:27:36.517 "name": "BaseBdev2", 01:27:36.517 "uuid": "9a45eb78-501a-5925-ae54-cf207a14e4dc", 01:27:36.517 "is_configured": true, 01:27:36.517 "data_offset": 0, 
01:27:36.517 "data_size": 65536 01:27:36.517 } 01:27:36.517 ] 01:27:36.517 }' 01:27:36.517 05:22:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:27:36.517 05:22:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 01:27:36.517 05:22:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:27:36.517 05:22:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 01:27:36.517 05:22:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 01:27:36.517 05:22:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:27:36.517 05:22:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:27:36.517 05:22:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:27:36.517 05:22:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:27:36.517 05:22:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 01:27:36.517 05:22:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:27:36.517 05:22:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:27:36.517 05:22:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:27:36.517 05:22:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 01:27:36.517 05:22:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:27:36.517 05:22:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:36.517 05:22:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 01:27:36.517 05:22:28 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:27:36.517 05:22:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:36.775 05:22:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:27:36.775 "name": "raid_bdev1", 01:27:36.775 "uuid": "274caafa-3f63-44e6-b557-7ad62006fc41", 01:27:36.775 "strip_size_kb": 0, 01:27:36.775 "state": "online", 01:27:36.775 "raid_level": "raid1", 01:27:36.775 "superblock": false, 01:27:36.775 "num_base_bdevs": 2, 01:27:36.775 "num_base_bdevs_discovered": 2, 01:27:36.775 "num_base_bdevs_operational": 2, 01:27:36.775 "base_bdevs_list": [ 01:27:36.775 { 01:27:36.775 "name": "spare", 01:27:36.775 "uuid": "9d2908ab-d058-5261-af79-29008ccd9976", 01:27:36.775 "is_configured": true, 01:27:36.775 "data_offset": 0, 01:27:36.775 "data_size": 65536 01:27:36.775 }, 01:27:36.775 { 01:27:36.775 "name": "BaseBdev2", 01:27:36.775 "uuid": "9a45eb78-501a-5925-ae54-cf207a14e4dc", 01:27:36.775 "is_configured": true, 01:27:36.775 "data_offset": 0, 01:27:36.775 "data_size": 65536 01:27:36.775 } 01:27:36.775 ] 01:27:36.775 }' 01:27:36.775 05:22:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:27:36.775 05:22:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 01:27:37.033 87.62 IOPS, 262.88 MiB/s [2024-12-09T05:22:28.650Z] 05:22:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 01:27:37.033 05:22:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:37.033 05:22:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 01:27:37.033 [2024-12-09 05:22:28.627429] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 01:27:37.033 [2024-12-09 05:22:28.627564] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 01:27:37.291 01:27:37.291 
Latency(us) 01:27:37.291 [2024-12-09T05:22:28.908Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:27:37.291 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 01:27:37.291 raid_bdev1 : 8.27 86.25 258.74 0.00 0.00 16863.31 283.00 112483.61 01:27:37.291 [2024-12-09T05:22:28.908Z] =================================================================================================================== 01:27:37.291 [2024-12-09T05:22:28.908Z] Total : 86.25 258.74 0.00 0.00 16863.31 283.00 112483.61 01:27:37.291 [2024-12-09 05:22:28.759027] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 01:27:37.291 [2024-12-09 05:22:28.759166] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:27:37.291 [2024-12-09 05:22:28.759282] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 01:27:37.291 [2024-12-09 05:22:28.759307] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 01:27:37.291 { 01:27:37.291 "results": [ 01:27:37.291 { 01:27:37.291 "job": "raid_bdev1", 01:27:37.291 "core_mask": "0x1", 01:27:37.291 "workload": "randrw", 01:27:37.291 "percentage": 50, 01:27:37.291 "status": "finished", 01:27:37.291 "queue_depth": 2, 01:27:37.291 "io_size": 3145728, 01:27:37.291 "runtime": 8.267034, 01:27:37.291 "iops": 86.24616760981024, 01:27:37.291 "mibps": 258.73850282943073, 01:27:37.291 "io_failed": 0, 01:27:37.291 "io_timeout": 0, 01:27:37.291 "avg_latency_us": 16863.30720387607, 01:27:37.291 "min_latency_us": 282.99636363636364, 01:27:37.291 "max_latency_us": 112483.60727272727 01:27:37.291 } 01:27:37.292 ], 01:27:37.292 "core_count": 1 01:27:37.292 } 01:27:37.292 05:22:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:37.292 05:22:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs 
all 01:27:37.292 05:22:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:37.292 05:22:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 01:27:37.292 05:22:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 01:27:37.292 05:22:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:37.292 05:22:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 01:27:37.292 05:22:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 01:27:37.292 05:22:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 01:27:37.292 05:22:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 01:27:37.292 05:22:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 01:27:37.292 05:22:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 01:27:37.292 05:22:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 01:27:37.292 05:22:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 01:27:37.292 05:22:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 01:27:37.292 05:22:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 01:27:37.292 05:22:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 01:27:37.292 05:22:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 01:27:37.292 05:22:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 01:27:37.550 /dev/nbd0 01:27:37.550 05:22:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 01:27:37.550 05:22:29 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 01:27:37.550 05:22:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 01:27:37.550 05:22:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 01:27:37.550 05:22:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 01:27:37.550 05:22:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 01:27:37.550 05:22:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 01:27:37.550 05:22:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 01:27:37.550 05:22:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 01:27:37.550 05:22:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 01:27:37.550 05:22:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 01:27:37.550 1+0 records in 01:27:37.550 1+0 records out 01:27:37.550 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000392587 s, 10.4 MB/s 01:27:37.550 05:22:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:27:37.550 05:22:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 01:27:37.550 05:22:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:27:37.550 05:22:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 01:27:37.551 05:22:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 01:27:37.551 05:22:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 01:27:37.551 05:22:29 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@14 -- # (( i < 1 )) 01:27:37.551 05:22:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 01:27:37.551 05:22:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 01:27:37.551 05:22:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 01:27:37.551 05:22:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 01:27:37.551 05:22:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 01:27:37.551 05:22:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 01:27:37.551 05:22:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 01:27:37.551 05:22:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 01:27:37.551 05:22:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 01:27:37.551 05:22:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 01:27:37.551 05:22:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 01:27:37.551 05:22:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 01:27:38.117 /dev/nbd1 01:27:38.117 05:22:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 01:27:38.117 05:22:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 01:27:38.117 05:22:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 01:27:38.117 05:22:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 01:27:38.117 05:22:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 01:27:38.117 05:22:29 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@875 -- # (( i <= 20 )) 01:27:38.117 05:22:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 01:27:38.117 05:22:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 01:27:38.117 05:22:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 01:27:38.117 05:22:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 01:27:38.117 05:22:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 01:27:38.117 1+0 records in 01:27:38.117 1+0 records out 01:27:38.117 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000464794 s, 8.8 MB/s 01:27:38.117 05:22:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:27:38.117 05:22:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 01:27:38.117 05:22:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:27:38.117 05:22:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 01:27:38.117 05:22:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 01:27:38.117 05:22:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 01:27:38.117 05:22:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 01:27:38.117 05:22:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 01:27:38.117 05:22:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 01:27:38.117 05:22:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 01:27:38.117 05:22:29 bdev_raid.raid_rebuild_test_io 
-- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 01:27:38.117 05:22:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 01:27:38.117 05:22:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 01:27:38.117 05:22:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 01:27:38.117 05:22:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 01:27:38.375 05:22:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 01:27:38.376 05:22:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 01:27:38.376 05:22:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 01:27:38.376 05:22:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:27:38.376 05:22:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:27:38.376 05:22:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 01:27:38.634 05:22:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 01:27:38.634 05:22:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 01:27:38.634 05:22:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 01:27:38.634 05:22:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 01:27:38.634 05:22:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 01:27:38.634 05:22:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 01:27:38.634 05:22:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 01:27:38.634 05:22:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 01:27:38.635 05:22:29 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 01:27:38.635 05:22:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 01:27:38.893 05:22:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 01:27:38.893 05:22:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 01:27:38.893 05:22:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:27:38.893 05:22:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:27:38.893 05:22:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 01:27:38.893 05:22:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 01:27:38.893 05:22:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 01:27:38.893 05:22:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 01:27:38.893 05:22:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 76621 01:27:38.893 05:22:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 76621 ']' 01:27:38.893 05:22:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 76621 01:27:38.893 05:22:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # uname 01:27:38.893 05:22:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:27:38.893 05:22:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76621 01:27:38.893 killing process with pid 76621 01:27:38.893 Received shutdown signal, test time was about 9.811922 seconds 01:27:38.893 01:27:38.893 Latency(us) 01:27:38.893 [2024-12-09T05:22:30.510Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:27:38.893 
[2024-12-09T05:22:30.510Z] =================================================================================================================== 01:27:38.893 [2024-12-09T05:22:30.510Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:27:38.893 05:22:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:27:38.893 05:22:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:27:38.893 05:22:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76621' 01:27:38.893 05:22:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 76621 01:27:38.893 [2024-12-09 05:22:30.285349] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 01:27:38.893 05:22:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 76621 01:27:38.893 [2024-12-09 05:22:30.492233] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 01:27:40.307 05:22:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 01:27:40.307 01:27:40.307 real 0m13.294s 01:27:40.307 user 0m17.183s 01:27:40.307 sys 0m1.566s 01:27:40.307 05:22:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 01:27:40.307 05:22:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 01:27:40.307 ************************************ 01:27:40.307 END TEST raid_rebuild_test_io 01:27:40.307 ************************************ 01:27:40.307 05:22:31 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 01:27:40.307 05:22:31 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 01:27:40.307 05:22:31 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 01:27:40.307 05:22:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 01:27:40.307 ************************************ 01:27:40.307 START TEST 
raid_rebuild_test_sb_io 01:27:40.307 ************************************ 01:27:40.307 05:22:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true true true 01:27:40.307 05:22:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 01:27:40.307 05:22:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 01:27:40.307 05:22:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 01:27:40.307 05:22:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 01:27:40.307 05:22:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 01:27:40.307 05:22:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 01:27:40.307 05:22:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 01:27:40.307 05:22:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 01:27:40.307 05:22:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 01:27:40.307 05:22:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 01:27:40.307 05:22:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 01:27:40.307 05:22:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 01:27:40.307 05:22:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 01:27:40.307 05:22:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 01:27:40.307 05:22:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 01:27:40.307 05:22:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 01:27:40.307 05:22:31 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@576 -- # local strip_size 01:27:40.307 05:22:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 01:27:40.307 05:22:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 01:27:40.307 05:22:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 01:27:40.307 05:22:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 01:27:40.307 05:22:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 01:27:40.307 05:22:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 01:27:40.307 05:22:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 01:27:40.307 05:22:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=77009 01:27:40.307 05:22:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 77009 01:27:40.307 05:22:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 01:27:40.307 05:22:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 77009 ']' 01:27:40.307 05:22:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:27:40.307 05:22:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100 01:27:40.307 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:27:40.307 05:22:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
01:27:40.307 05:22:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 01:27:40.307 05:22:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 01:27:40.307 [2024-12-09 05:22:31.903773] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:27:40.307 [2024-12-09 05:22:31.903989] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77009 ] 01:27:40.307 I/O size of 3145728 is greater than zero copy threshold (65536). 01:27:40.307 Zero copy mechanism will not be used. 01:27:40.566 [2024-12-09 05:22:32.088033] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:27:40.825 [2024-12-09 05:22:32.200467] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:27:40.825 [2024-12-09 05:22:32.402764] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 01:27:40.825 [2024-12-09 05:22:32.402898] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 01:27:41.393 05:22:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:27:41.393 05:22:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 01:27:41.393 05:22:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 01:27:41.393 05:22:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 01:27:41.393 05:22:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:41.393 05:22:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 01:27:41.393 BaseBdev1_malloc 01:27:41.393 05:22:32 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:41.393 05:22:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 01:27:41.393 05:22:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:41.393 05:22:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 01:27:41.393 [2024-12-09 05:22:32.950396] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 01:27:41.393 [2024-12-09 05:22:32.950718] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:27:41.393 [2024-12-09 05:22:32.950759] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 01:27:41.393 [2024-12-09 05:22:32.950779] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:27:41.393 [2024-12-09 05:22:32.953772] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:27:41.393 [2024-12-09 05:22:32.954042] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 01:27:41.393 BaseBdev1 01:27:41.393 05:22:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:41.393 05:22:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 01:27:41.393 05:22:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 01:27:41.393 05:22:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:41.393 05:22:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 01:27:41.393 BaseBdev2_malloc 01:27:41.393 05:22:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:41.393 05:22:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b 
BaseBdev2_malloc -p BaseBdev2 01:27:41.393 05:22:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:41.393 05:22:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 01:27:41.393 [2024-12-09 05:22:32.996969] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 01:27:41.393 [2024-12-09 05:22:32.997065] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:27:41.393 [2024-12-09 05:22:32.997101] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 01:27:41.393 [2024-12-09 05:22:32.997118] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:27:41.393 [2024-12-09 05:22:33.000142] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:27:41.393 [2024-12-09 05:22:33.000202] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 01:27:41.393 BaseBdev2 01:27:41.393 05:22:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:41.393 05:22:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 01:27:41.393 05:22:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:41.393 05:22:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 01:27:41.653 spare_malloc 01:27:41.653 05:22:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:41.653 05:22:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 01:27:41.653 05:22:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:41.653 05:22:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 01:27:41.653 spare_delay 
01:27:41.653 05:22:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:41.653 05:22:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 01:27:41.653 05:22:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:41.653 05:22:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 01:27:41.653 [2024-12-09 05:22:33.069359] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 01:27:41.653 [2024-12-09 05:22:33.069484] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:27:41.653 [2024-12-09 05:22:33.069546] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 01:27:41.653 [2024-12-09 05:22:33.069566] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:27:41.653 [2024-12-09 05:22:33.072379] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:27:41.653 [2024-12-09 05:22:33.072615] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 01:27:41.653 spare 01:27:41.653 05:22:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:41.653 05:22:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 01:27:41.653 05:22:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:41.653 05:22:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 01:27:41.653 [2024-12-09 05:22:33.081483] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 01:27:41.653 [2024-12-09 05:22:33.083987] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 01:27:41.653 [2024-12-09 05:22:33.084189] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 01:27:41.653 [2024-12-09 05:22:33.084211] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 01:27:41.653 [2024-12-09 05:22:33.084515] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 01:27:41.653 [2024-12-09 05:22:33.084804] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 01:27:41.653 [2024-12-09 05:22:33.084819] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 01:27:41.653 [2024-12-09 05:22:33.085018] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:27:41.653 05:22:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:41.653 05:22:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 01:27:41.653 05:22:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:27:41.653 05:22:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:27:41.653 05:22:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:27:41.653 05:22:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:27:41.653 05:22:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 01:27:41.653 05:22:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:27:41.653 05:22:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:27:41.653 05:22:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:27:41.653 05:22:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 01:27:41.653 05:22:33 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:27:41.653 05:22:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:27:41.653 05:22:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:41.653 05:22:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 01:27:41.653 05:22:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:41.653 05:22:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:27:41.653 "name": "raid_bdev1", 01:27:41.653 "uuid": "9e2d7140-e452-40ec-846c-2f0043fbb011", 01:27:41.653 "strip_size_kb": 0, 01:27:41.653 "state": "online", 01:27:41.653 "raid_level": "raid1", 01:27:41.653 "superblock": true, 01:27:41.653 "num_base_bdevs": 2, 01:27:41.653 "num_base_bdevs_discovered": 2, 01:27:41.653 "num_base_bdevs_operational": 2, 01:27:41.653 "base_bdevs_list": [ 01:27:41.653 { 01:27:41.653 "name": "BaseBdev1", 01:27:41.653 "uuid": "b0c8a07e-2570-5624-a91f-f20bf0cf2e8e", 01:27:41.653 "is_configured": true, 01:27:41.653 "data_offset": 2048, 01:27:41.653 "data_size": 63488 01:27:41.653 }, 01:27:41.653 { 01:27:41.653 "name": "BaseBdev2", 01:27:41.653 "uuid": "91c5720f-e4ea-5360-bd58-3e9243394d70", 01:27:41.653 "is_configured": true, 01:27:41.653 "data_offset": 2048, 01:27:41.653 "data_size": 63488 01:27:41.653 } 01:27:41.653 ] 01:27:41.653 }' 01:27:41.653 05:22:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:27:41.653 05:22:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 01:27:42.220 05:22:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 01:27:42.220 05:22:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:42.220 05:22:33 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 01:27:42.220 05:22:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 01:27:42.220 [2024-12-09 05:22:33.586088] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 01:27:42.220 05:22:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:42.220 05:22:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 01:27:42.220 05:22:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 01:27:42.220 05:22:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:42.220 05:22:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 01:27:42.220 05:22:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 01:27:42.220 05:22:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:42.220 05:22:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 01:27:42.220 05:22:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 01:27:42.220 05:22:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 01:27:42.220 05:22:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 01:27:42.220 05:22:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:42.220 05:22:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 01:27:42.220 [2024-12-09 05:22:33.689749] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 01:27:42.220 05:22:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 01:27:42.220 05:22:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 01:27:42.220 05:22:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:27:42.220 05:22:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:27:42.220 05:22:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:27:42.220 05:22:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:27:42.220 05:22:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 01:27:42.220 05:22:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:27:42.220 05:22:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:27:42.220 05:22:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:27:42.220 05:22:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 01:27:42.220 05:22:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:27:42.220 05:22:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:42.220 05:22:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 01:27:42.220 05:22:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:27:42.220 05:22:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:42.220 05:22:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:27:42.220 "name": "raid_bdev1", 01:27:42.220 "uuid": "9e2d7140-e452-40ec-846c-2f0043fbb011", 01:27:42.220 "strip_size_kb": 0, 01:27:42.220 "state": "online", 01:27:42.220 
"raid_level": "raid1", 01:27:42.220 "superblock": true, 01:27:42.220 "num_base_bdevs": 2, 01:27:42.220 "num_base_bdevs_discovered": 1, 01:27:42.220 "num_base_bdevs_operational": 1, 01:27:42.220 "base_bdevs_list": [ 01:27:42.220 { 01:27:42.220 "name": null, 01:27:42.220 "uuid": "00000000-0000-0000-0000-000000000000", 01:27:42.220 "is_configured": false, 01:27:42.220 "data_offset": 0, 01:27:42.220 "data_size": 63488 01:27:42.220 }, 01:27:42.220 { 01:27:42.220 "name": "BaseBdev2", 01:27:42.220 "uuid": "91c5720f-e4ea-5360-bd58-3e9243394d70", 01:27:42.220 "is_configured": true, 01:27:42.220 "data_offset": 2048, 01:27:42.220 "data_size": 63488 01:27:42.220 } 01:27:42.220 ] 01:27:42.220 }' 01:27:42.220 05:22:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:27:42.220 05:22:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 01:27:42.220 [2024-12-09 05:22:33.798830] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 01:27:42.220 I/O size of 3145728 is greater than zero copy threshold (65536). 01:27:42.220 Zero copy mechanism will not be used. 01:27:42.220 Running I/O for 60 seconds... 
01:27:42.787 05:22:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 01:27:42.787 05:22:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:42.787 05:22:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 01:27:42.787 [2024-12-09 05:22:34.227306] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 01:27:42.787 05:22:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:42.787 05:22:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 01:27:42.787 [2024-12-09 05:22:34.277111] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 01:27:42.787 [2024-12-09 05:22:34.279589] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 01:27:42.787 [2024-12-09 05:22:34.388981] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 01:27:42.787 [2024-12-09 05:22:34.389577] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 01:27:43.045 [2024-12-09 05:22:34.605627] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 01:27:43.045 [2024-12-09 05:22:34.605968] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 01:27:43.571 155.00 IOPS, 465.00 MiB/s [2024-12-09T05:22:35.188Z] [2024-12-09 05:22:35.076464] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 01:27:43.832 05:22:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 01:27:43.832 05:22:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- 
# local raid_bdev_name=raid_bdev1 01:27:43.832 05:22:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 01:27:43.832 05:22:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 01:27:43.832 05:22:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:27:43.832 05:22:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:27:43.832 05:22:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:27:43.832 05:22:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:43.832 05:22:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 01:27:43.832 05:22:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:43.832 05:22:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:27:43.832 "name": "raid_bdev1", 01:27:43.832 "uuid": "9e2d7140-e452-40ec-846c-2f0043fbb011", 01:27:43.832 "strip_size_kb": 0, 01:27:43.832 "state": "online", 01:27:43.832 "raid_level": "raid1", 01:27:43.832 "superblock": true, 01:27:43.832 "num_base_bdevs": 2, 01:27:43.832 "num_base_bdevs_discovered": 2, 01:27:43.832 "num_base_bdevs_operational": 2, 01:27:43.832 "process": { 01:27:43.832 "type": "rebuild", 01:27:43.832 "target": "spare", 01:27:43.832 "progress": { 01:27:43.832 "blocks": 10240, 01:27:43.832 "percent": 16 01:27:43.832 } 01:27:43.832 }, 01:27:43.832 "base_bdevs_list": [ 01:27:43.832 { 01:27:43.832 "name": "spare", 01:27:43.832 "uuid": "bccd64db-384e-5ddc-b722-5570f52e6787", 01:27:43.832 "is_configured": true, 01:27:43.832 "data_offset": 2048, 01:27:43.832 "data_size": 63488 01:27:43.832 }, 01:27:43.832 { 01:27:43.832 "name": "BaseBdev2", 01:27:43.832 "uuid": "91c5720f-e4ea-5360-bd58-3e9243394d70", 01:27:43.832 "is_configured": true, 
01:27:43.832 "data_offset": 2048, 01:27:43.832 "data_size": 63488 01:27:43.832 } 01:27:43.832 ] 01:27:43.832 }' 01:27:43.832 05:22:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:27:43.832 05:22:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 01:27:43.832 05:22:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:27:43.832 [2024-12-09 05:22:35.420307] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 01:27:43.832 05:22:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 01:27:43.832 05:22:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 01:27:43.832 05:22:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:43.832 05:22:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 01:27:43.832 [2024-12-09 05:22:35.426171] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 01:27:44.091 [2024-12-09 05:22:35.535783] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 01:27:44.091 [2024-12-09 05:22:35.637011] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 01:27:44.091 [2024-12-09 05:22:35.639277] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:27:44.091 [2024-12-09 05:22:35.639331] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 01:27:44.091 [2024-12-09 05:22:35.639344] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 01:27:44.091 [2024-12-09 05:22:35.673072] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 
0x60d000006080 01:27:44.091 05:22:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:44.091 05:22:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 01:27:44.091 05:22:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:27:44.091 05:22:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:27:44.091 05:22:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:27:44.091 05:22:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:27:44.091 05:22:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 01:27:44.091 05:22:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:27:44.091 05:22:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:27:44.091 05:22:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:27:44.091 05:22:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 01:27:44.350 05:22:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:27:44.350 05:22:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:44.350 05:22:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:27:44.350 05:22:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 01:27:44.350 05:22:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:44.350 05:22:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:27:44.350 "name": "raid_bdev1", 01:27:44.350 "uuid": 
"9e2d7140-e452-40ec-846c-2f0043fbb011", 01:27:44.350 "strip_size_kb": 0, 01:27:44.350 "state": "online", 01:27:44.350 "raid_level": "raid1", 01:27:44.350 "superblock": true, 01:27:44.350 "num_base_bdevs": 2, 01:27:44.350 "num_base_bdevs_discovered": 1, 01:27:44.350 "num_base_bdevs_operational": 1, 01:27:44.350 "base_bdevs_list": [ 01:27:44.350 { 01:27:44.350 "name": null, 01:27:44.350 "uuid": "00000000-0000-0000-0000-000000000000", 01:27:44.350 "is_configured": false, 01:27:44.350 "data_offset": 0, 01:27:44.350 "data_size": 63488 01:27:44.350 }, 01:27:44.350 { 01:27:44.350 "name": "BaseBdev2", 01:27:44.350 "uuid": "91c5720f-e4ea-5360-bd58-3e9243394d70", 01:27:44.350 "is_configured": true, 01:27:44.350 "data_offset": 2048, 01:27:44.350 "data_size": 63488 01:27:44.350 } 01:27:44.350 ] 01:27:44.350 }' 01:27:44.350 05:22:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:27:44.350 05:22:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 01:27:44.608 128.50 IOPS, 385.50 MiB/s [2024-12-09T05:22:36.225Z] 05:22:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 01:27:44.608 05:22:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:27:44.608 05:22:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 01:27:44.608 05:22:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 01:27:44.608 05:22:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:27:44.865 05:22:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:27:44.865 05:22:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:27:44.865 05:22:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 
01:27:44.865 05:22:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 01:27:44.865 05:22:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:44.865 05:22:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:27:44.865 "name": "raid_bdev1", 01:27:44.865 "uuid": "9e2d7140-e452-40ec-846c-2f0043fbb011", 01:27:44.865 "strip_size_kb": 0, 01:27:44.865 "state": "online", 01:27:44.865 "raid_level": "raid1", 01:27:44.865 "superblock": true, 01:27:44.865 "num_base_bdevs": 2, 01:27:44.865 "num_base_bdevs_discovered": 1, 01:27:44.865 "num_base_bdevs_operational": 1, 01:27:44.865 "base_bdevs_list": [ 01:27:44.865 { 01:27:44.865 "name": null, 01:27:44.865 "uuid": "00000000-0000-0000-0000-000000000000", 01:27:44.865 "is_configured": false, 01:27:44.865 "data_offset": 0, 01:27:44.865 "data_size": 63488 01:27:44.865 }, 01:27:44.865 { 01:27:44.865 "name": "BaseBdev2", 01:27:44.865 "uuid": "91c5720f-e4ea-5360-bd58-3e9243394d70", 01:27:44.865 "is_configured": true, 01:27:44.865 "data_offset": 2048, 01:27:44.865 "data_size": 63488 01:27:44.865 } 01:27:44.865 ] 01:27:44.865 }' 01:27:44.865 05:22:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:27:44.865 05:22:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 01:27:44.865 05:22:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:27:44.865 05:22:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 01:27:44.865 05:22:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 01:27:44.865 05:22:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:44.865 05:22:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 01:27:44.865 
[2024-12-09 05:22:36.390504] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 01:27:44.865 05:22:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:44.865 05:22:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 01:27:44.865 [2024-12-09 05:22:36.457556] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 01:27:44.865 [2024-12-09 05:22:36.460679] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 01:27:45.122 [2024-12-09 05:22:36.571505] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 01:27:45.122 [2024-12-09 05:22:36.572432] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 01:27:45.380 146.00 IOPS, 438.00 MiB/s [2024-12-09T05:22:36.997Z] [2024-12-09 05:22:36.805096] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 01:27:45.380 [2024-12-09 05:22:36.805547] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 01:27:45.639 [2024-12-09 05:22:37.143591] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 01:27:45.898 [2024-12-09 05:22:37.278794] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 01:27:45.898 [2024-12-09 05:22:37.279168] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 01:27:45.898 05:22:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 01:27:45.898 05:22:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 01:27:45.898 05:22:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 01:27:45.898 05:22:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 01:27:45.898 05:22:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:27:45.898 05:22:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:27:45.898 05:22:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:27:45.898 05:22:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:45.898 05:22:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 01:27:45.898 05:22:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:45.898 05:22:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:27:45.898 "name": "raid_bdev1", 01:27:45.898 "uuid": "9e2d7140-e452-40ec-846c-2f0043fbb011", 01:27:45.898 "strip_size_kb": 0, 01:27:45.898 "state": "online", 01:27:45.898 "raid_level": "raid1", 01:27:45.898 "superblock": true, 01:27:45.898 "num_base_bdevs": 2, 01:27:45.898 "num_base_bdevs_discovered": 2, 01:27:45.898 "num_base_bdevs_operational": 2, 01:27:45.898 "process": { 01:27:45.898 "type": "rebuild", 01:27:45.898 "target": "spare", 01:27:45.898 "progress": { 01:27:45.898 "blocks": 10240, 01:27:45.898 "percent": 16 01:27:45.898 } 01:27:45.898 }, 01:27:45.898 "base_bdevs_list": [ 01:27:45.898 { 01:27:45.898 "name": "spare", 01:27:45.898 "uuid": "bccd64db-384e-5ddc-b722-5570f52e6787", 01:27:45.898 "is_configured": true, 01:27:45.898 "data_offset": 2048, 01:27:45.898 "data_size": 63488 01:27:45.898 }, 01:27:45.898 { 01:27:45.898 "name": "BaseBdev2", 01:27:45.898 "uuid": "91c5720f-e4ea-5360-bd58-3e9243394d70", 01:27:45.898 "is_configured": true, 01:27:45.898 
"data_offset": 2048, 01:27:45.898 "data_size": 63488 01:27:45.898 } 01:27:45.898 ] 01:27:45.898 }' 01:27:45.898 05:22:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:27:46.167 05:22:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 01:27:46.167 05:22:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:27:46.167 05:22:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 01:27:46.167 05:22:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 01:27:46.167 05:22:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 01:27:46.167 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 01:27:46.167 05:22:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 01:27:46.167 05:22:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 01:27:46.167 05:22:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 01:27:46.167 05:22:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=459 01:27:46.167 05:22:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 01:27:46.167 05:22:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 01:27:46.167 05:22:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:27:46.167 05:22:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 01:27:46.167 05:22:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 01:27:46.167 05:22:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local 
raid_bdev_info 01:27:46.167 05:22:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:27:46.167 05:22:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:27:46.167 05:22:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:46.167 05:22:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 01:27:46.167 05:22:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:46.167 [2024-12-09 05:22:37.633019] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 01:27:46.167 [2024-12-09 05:22:37.633615] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 01:27:46.167 05:22:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:27:46.167 "name": "raid_bdev1", 01:27:46.167 "uuid": "9e2d7140-e452-40ec-846c-2f0043fbb011", 01:27:46.167 "strip_size_kb": 0, 01:27:46.167 "state": "online", 01:27:46.167 "raid_level": "raid1", 01:27:46.167 "superblock": true, 01:27:46.167 "num_base_bdevs": 2, 01:27:46.167 "num_base_bdevs_discovered": 2, 01:27:46.167 "num_base_bdevs_operational": 2, 01:27:46.167 "process": { 01:27:46.167 "type": "rebuild", 01:27:46.167 "target": "spare", 01:27:46.167 "progress": { 01:27:46.167 "blocks": 12288, 01:27:46.167 "percent": 19 01:27:46.167 } 01:27:46.167 }, 01:27:46.167 "base_bdevs_list": [ 01:27:46.167 { 01:27:46.167 "name": "spare", 01:27:46.167 "uuid": "bccd64db-384e-5ddc-b722-5570f52e6787", 01:27:46.167 "is_configured": true, 01:27:46.167 "data_offset": 2048, 01:27:46.167 "data_size": 63488 01:27:46.167 }, 01:27:46.167 { 01:27:46.167 "name": "BaseBdev2", 01:27:46.167 "uuid": "91c5720f-e4ea-5360-bd58-3e9243394d70", 01:27:46.167 "is_configured": true, 01:27:46.167 
"data_offset": 2048, 01:27:46.167 "data_size": 63488 01:27:46.167 } 01:27:46.167 ] 01:27:46.167 }' 01:27:46.167 05:22:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:27:46.167 05:22:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 01:27:46.167 05:22:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:27:46.167 05:22:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 01:27:46.167 05:22:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 01:27:46.443 128.00 IOPS, 384.00 MiB/s [2024-12-09T05:22:38.060Z] [2024-12-09 05:22:37.851163] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 01:27:46.443 [2024-12-09 05:22:37.851611] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 01:27:46.701 [2024-12-09 05:22:38.266942] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 01:27:46.959 [2024-12-09 05:22:38.479890] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 01:27:46.959 [2024-12-09 05:22:38.480810] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 01:27:47.217 [2024-12-09 05:22:38.590586] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 01:27:47.217 05:22:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 01:27:47.217 05:22:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 01:27:47.217 05:22:38 bdev_raid.raid_rebuild_test_sb_io 
-- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:27:47.217 05:22:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 01:27:47.217 05:22:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 01:27:47.217 05:22:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:27:47.217 05:22:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:27:47.217 05:22:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:27:47.217 05:22:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:47.217 05:22:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 01:27:47.217 05:22:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:47.217 118.80 IOPS, 356.40 MiB/s [2024-12-09T05:22:38.834Z] 05:22:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:27:47.217 "name": "raid_bdev1", 01:27:47.217 "uuid": "9e2d7140-e452-40ec-846c-2f0043fbb011", 01:27:47.217 "strip_size_kb": 0, 01:27:47.217 "state": "online", 01:27:47.217 "raid_level": "raid1", 01:27:47.217 "superblock": true, 01:27:47.217 "num_base_bdevs": 2, 01:27:47.217 "num_base_bdevs_discovered": 2, 01:27:47.217 "num_base_bdevs_operational": 2, 01:27:47.217 "process": { 01:27:47.217 "type": "rebuild", 01:27:47.217 "target": "spare", 01:27:47.217 "progress": { 01:27:47.217 "blocks": 30720, 01:27:47.217 "percent": 48 01:27:47.217 } 01:27:47.217 }, 01:27:47.217 "base_bdevs_list": [ 01:27:47.217 { 01:27:47.217 "name": "spare", 01:27:47.217 "uuid": "bccd64db-384e-5ddc-b722-5570f52e6787", 01:27:47.217 "is_configured": true, 01:27:47.217 "data_offset": 2048, 01:27:47.217 "data_size": 63488 01:27:47.217 }, 01:27:47.217 { 01:27:47.217 "name": "BaseBdev2", 01:27:47.217 "uuid": 
"91c5720f-e4ea-5360-bd58-3e9243394d70", 01:27:47.217 "is_configured": true, 01:27:47.217 "data_offset": 2048, 01:27:47.217 "data_size": 63488 01:27:47.217 } 01:27:47.217 ] 01:27:47.217 }' 01:27:47.217 05:22:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:27:47.476 05:22:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 01:27:47.476 05:22:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:27:47.476 05:22:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 01:27:47.476 05:22:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 01:27:47.734 [2024-12-09 05:22:39.157944] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 01:27:48.577 107.50 IOPS, 322.50 MiB/s [2024-12-09T05:22:40.194Z] 05:22:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 01:27:48.577 05:22:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 01:27:48.577 05:22:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:27:48.577 05:22:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 01:27:48.577 05:22:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 01:27:48.577 05:22:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:27:48.577 05:22:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:27:48.577 05:22:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:48.577 05:22:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 01:27:48.577 
05:22:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:27:48.577 05:22:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:48.577 05:22:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:27:48.577 "name": "raid_bdev1", 01:27:48.577 "uuid": "9e2d7140-e452-40ec-846c-2f0043fbb011", 01:27:48.577 "strip_size_kb": 0, 01:27:48.577 "state": "online", 01:27:48.577 "raid_level": "raid1", 01:27:48.577 "superblock": true, 01:27:48.577 "num_base_bdevs": 2, 01:27:48.577 "num_base_bdevs_discovered": 2, 01:27:48.577 "num_base_bdevs_operational": 2, 01:27:48.577 "process": { 01:27:48.577 "type": "rebuild", 01:27:48.577 "target": "spare", 01:27:48.577 "progress": { 01:27:48.577 "blocks": 51200, 01:27:48.577 "percent": 80 01:27:48.577 } 01:27:48.577 }, 01:27:48.577 "base_bdevs_list": [ 01:27:48.577 { 01:27:48.577 "name": "spare", 01:27:48.577 "uuid": "bccd64db-384e-5ddc-b722-5570f52e6787", 01:27:48.577 "is_configured": true, 01:27:48.577 "data_offset": 2048, 01:27:48.577 "data_size": 63488 01:27:48.577 }, 01:27:48.577 { 01:27:48.577 "name": "BaseBdev2", 01:27:48.577 "uuid": "91c5720f-e4ea-5360-bd58-3e9243394d70", 01:27:48.577 "is_configured": true, 01:27:48.577 "data_offset": 2048, 01:27:48.577 "data_size": 63488 01:27:48.577 } 01:27:48.577 ] 01:27:48.577 }' 01:27:48.577 05:22:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:27:48.577 05:22:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 01:27:48.577 05:22:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:27:48.578 05:22:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 01:27:48.578 05:22:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 01:27:49.144 [2024-12-09 
05:22:40.557845] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 01:27:49.144 [2024-12-09 05:22:40.657879] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 01:27:49.144 [2024-12-09 05:22:40.669346] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:27:49.661 97.86 IOPS, 293.57 MiB/s [2024-12-09T05:22:41.278Z] 05:22:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 01:27:49.661 05:22:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 01:27:49.661 05:22:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:27:49.661 05:22:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 01:27:49.661 05:22:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 01:27:49.661 05:22:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:27:49.662 05:22:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:27:49.662 05:22:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:49.662 05:22:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 01:27:49.662 05:22:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:27:49.662 05:22:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:49.662 05:22:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:27:49.662 "name": "raid_bdev1", 01:27:49.662 "uuid": "9e2d7140-e452-40ec-846c-2f0043fbb011", 01:27:49.662 "strip_size_kb": 0, 01:27:49.662 "state": "online", 01:27:49.662 "raid_level": "raid1", 01:27:49.662 "superblock": true, 
01:27:49.662 "num_base_bdevs": 2, 01:27:49.662 "num_base_bdevs_discovered": 2, 01:27:49.662 "num_base_bdevs_operational": 2, 01:27:49.662 "base_bdevs_list": [ 01:27:49.662 { 01:27:49.662 "name": "spare", 01:27:49.662 "uuid": "bccd64db-384e-5ddc-b722-5570f52e6787", 01:27:49.662 "is_configured": true, 01:27:49.662 "data_offset": 2048, 01:27:49.662 "data_size": 63488 01:27:49.662 }, 01:27:49.662 { 01:27:49.662 "name": "BaseBdev2", 01:27:49.662 "uuid": "91c5720f-e4ea-5360-bd58-3e9243394d70", 01:27:49.662 "is_configured": true, 01:27:49.662 "data_offset": 2048, 01:27:49.662 "data_size": 63488 01:27:49.662 } 01:27:49.662 ] 01:27:49.662 }' 01:27:49.662 05:22:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:27:49.662 05:22:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 01:27:49.662 05:22:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:27:49.920 05:22:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 01:27:49.920 05:22:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 01:27:49.920 05:22:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 01:27:49.920 05:22:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:27:49.920 05:22:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 01:27:49.920 05:22:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 01:27:49.920 05:22:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:27:49.920 05:22:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:27:49.920 05:22:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 
01:27:49.920 05:22:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 01:27:49.920 05:22:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:27:49.920 05:22:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:49.920 05:22:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:27:49.920 "name": "raid_bdev1", 01:27:49.920 "uuid": "9e2d7140-e452-40ec-846c-2f0043fbb011", 01:27:49.920 "strip_size_kb": 0, 01:27:49.920 "state": "online", 01:27:49.920 "raid_level": "raid1", 01:27:49.920 "superblock": true, 01:27:49.920 "num_base_bdevs": 2, 01:27:49.920 "num_base_bdevs_discovered": 2, 01:27:49.920 "num_base_bdevs_operational": 2, 01:27:49.920 "base_bdevs_list": [ 01:27:49.920 { 01:27:49.920 "name": "spare", 01:27:49.920 "uuid": "bccd64db-384e-5ddc-b722-5570f52e6787", 01:27:49.920 "is_configured": true, 01:27:49.920 "data_offset": 2048, 01:27:49.920 "data_size": 63488 01:27:49.920 }, 01:27:49.920 { 01:27:49.920 "name": "BaseBdev2", 01:27:49.920 "uuid": "91c5720f-e4ea-5360-bd58-3e9243394d70", 01:27:49.920 "is_configured": true, 01:27:49.920 "data_offset": 2048, 01:27:49.920 "data_size": 63488 01:27:49.920 } 01:27:49.920 ] 01:27:49.920 }' 01:27:49.920 05:22:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:27:49.920 05:22:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 01:27:49.920 05:22:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:27:49.920 05:22:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 01:27:49.920 05:22:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 01:27:49.920 05:22:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 01:27:49.920 05:22:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:27:49.920 05:22:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:27:49.920 05:22:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:27:49.920 05:22:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 01:27:49.920 05:22:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:27:49.920 05:22:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:27:49.920 05:22:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:27:49.920 05:22:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 01:27:49.920 05:22:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:27:49.920 05:22:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:27:49.920 05:22:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:49.920 05:22:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 01:27:49.920 05:22:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:49.921 05:22:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:27:49.921 "name": "raid_bdev1", 01:27:49.921 "uuid": "9e2d7140-e452-40ec-846c-2f0043fbb011", 01:27:49.921 "strip_size_kb": 0, 01:27:49.921 "state": "online", 01:27:49.921 "raid_level": "raid1", 01:27:49.921 "superblock": true, 01:27:49.921 "num_base_bdevs": 2, 01:27:49.921 "num_base_bdevs_discovered": 2, 01:27:49.921 "num_base_bdevs_operational": 2, 01:27:49.921 "base_bdevs_list": [ 01:27:49.921 { 01:27:49.921 "name": 
"spare", 01:27:49.921 "uuid": "bccd64db-384e-5ddc-b722-5570f52e6787", 01:27:49.921 "is_configured": true, 01:27:49.921 "data_offset": 2048, 01:27:49.921 "data_size": 63488 01:27:49.921 }, 01:27:49.921 { 01:27:49.921 "name": "BaseBdev2", 01:27:49.921 "uuid": "91c5720f-e4ea-5360-bd58-3e9243394d70", 01:27:49.921 "is_configured": true, 01:27:49.921 "data_offset": 2048, 01:27:49.921 "data_size": 63488 01:27:49.921 } 01:27:49.921 ] 01:27:49.921 }' 01:27:49.921 05:22:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:27:49.921 05:22:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 01:27:50.488 92.25 IOPS, 276.75 MiB/s [2024-12-09T05:22:42.105Z] 05:22:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 01:27:50.488 05:22:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:50.488 05:22:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 01:27:50.488 [2024-12-09 05:22:41.981261] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 01:27:50.488 [2024-12-09 05:22:41.981317] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 01:27:50.488 01:27:50.488 Latency(us) 01:27:50.488 [2024-12-09T05:22:42.105Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:27:50.488 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 01:27:50.488 raid_bdev1 : 8.22 90.40 271.19 0.00 0.00 14543.90 255.07 116296.61 01:27:50.488 [2024-12-09T05:22:42.105Z] =================================================================================================================== 01:27:50.488 [2024-12-09T05:22:42.105Z] Total : 90.40 271.19 0.00 0.00 14543.90 255.07 116296.61 01:27:50.488 [2024-12-09 05:22:42.042039] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 01:27:50.488 
[2024-12-09 05:22:42.042138] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:27:50.488 { 01:27:50.488 "results": [ 01:27:50.488 { 01:27:50.488 "job": "raid_bdev1", 01:27:50.488 "core_mask": "0x1", 01:27:50.488 "workload": "randrw", 01:27:50.488 "percentage": 50, 01:27:50.488 "status": "finished", 01:27:50.488 "queue_depth": 2, 01:27:50.488 "io_size": 3145728, 01:27:50.488 "runtime": 8.219313, 01:27:50.488 "iops": 90.3968494690493, 01:27:50.488 "mibps": 271.1905484071479, 01:27:50.488 "io_failed": 0, 01:27:50.488 "io_timeout": 0, 01:27:50.488 "avg_latency_us": 14543.900942126515, 01:27:50.488 "min_latency_us": 255.0690909090909, 01:27:50.488 "max_latency_us": 116296.61090909092 01:27:50.488 } 01:27:50.488 ], 01:27:50.488 "core_count": 1 01:27:50.488 } 01:27:50.488 [2024-12-09 05:22:42.042241] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 01:27:50.488 [2024-12-09 05:22:42.042261] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 01:27:50.488 05:22:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:50.488 05:22:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 01:27:50.488 05:22:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 01:27:50.488 05:22:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:50.488 05:22:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 01:27:50.488 05:22:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:50.746 05:22:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 01:27:50.746 05:22:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 01:27:50.746 05:22:42 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@723 -- # '[' true = true ']' 01:27:50.746 05:22:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 01:27:50.746 05:22:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 01:27:50.746 05:22:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 01:27:50.746 05:22:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 01:27:50.746 05:22:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 01:27:50.746 05:22:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 01:27:50.746 05:22:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 01:27:50.746 05:22:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 01:27:50.746 05:22:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 01:27:50.746 05:22:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 01:27:51.005 /dev/nbd0 01:27:51.005 05:22:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 01:27:51.005 05:22:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 01:27:51.005 05:22:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 01:27:51.005 05:22:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 01:27:51.005 05:22:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 01:27:51.005 05:22:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 01:27:51.005 05:22:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 01:27:51.005 
05:22:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 01:27:51.005 05:22:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 01:27:51.005 05:22:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 01:27:51.005 05:22:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 01:27:51.005 1+0 records in 01:27:51.005 1+0 records out 01:27:51.005 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000441638 s, 9.3 MB/s 01:27:51.005 05:22:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:27:51.005 05:22:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 01:27:51.005 05:22:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:27:51.005 05:22:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 01:27:51.005 05:22:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 01:27:51.005 05:22:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 01:27:51.005 05:22:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 01:27:51.005 05:22:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 01:27:51.005 05:22:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 01:27:51.005 05:22:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 01:27:51.005 05:22:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 01:27:51.005 05:22:42 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 01:27:51.005 05:22:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 01:27:51.005 05:22:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 01:27:51.005 05:22:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 01:27:51.005 05:22:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 01:27:51.005 05:22:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 01:27:51.005 05:22:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 01:27:51.005 05:22:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 01:27:51.264 /dev/nbd1 01:27:51.264 05:22:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 01:27:51.264 05:22:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 01:27:51.264 05:22:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 01:27:51.264 05:22:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 01:27:51.264 05:22:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 01:27:51.264 05:22:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 01:27:51.264 05:22:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 01:27:51.264 05:22:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 01:27:51.264 05:22:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 01:27:51.264 05:22:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 01:27:51.264 05:22:42 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 01:27:51.264 1+0 records in 01:27:51.264 1+0 records out 01:27:51.264 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000344829 s, 11.9 MB/s 01:27:51.264 05:22:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:27:51.264 05:22:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 01:27:51.264 05:22:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:27:51.264 05:22:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 01:27:51.264 05:22:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 01:27:51.264 05:22:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 01:27:51.264 05:22:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 01:27:51.264 05:22:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 01:27:51.522 05:22:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 01:27:51.522 05:22:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 01:27:51.522 05:22:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 01:27:51.522 05:22:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 01:27:51.522 05:22:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 01:27:51.522 05:22:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 01:27:51.522 05:22:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 01:27:51.780 05:22:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 01:27:51.780 05:22:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 01:27:51.780 05:22:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 01:27:51.780 05:22:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:27:51.780 05:22:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:27:51.780 05:22:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 01:27:51.780 05:22:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 01:27:51.780 05:22:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 01:27:51.780 05:22:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 01:27:51.780 05:22:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 01:27:51.780 05:22:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 01:27:51.780 05:22:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 01:27:51.780 05:22:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 01:27:51.780 05:22:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 01:27:51.780 05:22:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 01:27:52.039 05:22:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 01:27:52.039 05:22:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 01:27:52.039 05:22:43 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 01:27:52.039 05:22:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:27:52.039 05:22:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:27:52.039 05:22:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 01:27:52.039 05:22:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 01:27:52.039 05:22:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 01:27:52.039 05:22:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 01:27:52.039 05:22:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 01:27:52.039 05:22:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:52.039 05:22:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 01:27:52.039 05:22:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:52.039 05:22:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 01:27:52.039 05:22:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:52.039 05:22:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 01:27:52.039 [2024-12-09 05:22:43.532242] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 01:27:52.039 [2024-12-09 05:22:43.532341] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:27:52.039 [2024-12-09 05:22:43.532421] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 01:27:52.039 [2024-12-09 05:22:43.532441] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:27:52.039 [2024-12-09 05:22:43.535603] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:27:52.039 [2024-12-09 05:22:43.535651] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 01:27:52.039 [2024-12-09 05:22:43.535763] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 01:27:52.039 [2024-12-09 05:22:43.535846] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 01:27:52.039 [2024-12-09 05:22:43.536043] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 01:27:52.039 spare 01:27:52.039 05:22:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:52.039 05:22:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 01:27:52.039 05:22:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:52.039 05:22:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 01:27:52.039 [2024-12-09 05:22:43.636185] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 01:27:52.039 [2024-12-09 05:22:43.636229] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 01:27:52.039 [2024-12-09 05:22:43.636670] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b0d0 01:27:52.039 [2024-12-09 05:22:43.636946] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 01:27:52.039 [2024-12-09 05:22:43.636980] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 01:27:52.039 [2024-12-09 05:22:43.637228] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:27:52.039 05:22:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:52.039 05:22:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # 
verify_raid_bdev_state raid_bdev1 online raid1 0 2 01:27:52.039 05:22:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:27:52.039 05:22:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:27:52.039 05:22:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:27:52.039 05:22:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:27:52.039 05:22:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 01:27:52.039 05:22:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:27:52.039 05:22:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:27:52.039 05:22:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:27:52.039 05:22:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 01:27:52.039 05:22:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:27:52.039 05:22:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:52.039 05:22:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 01:27:52.039 05:22:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:27:52.297 05:22:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:52.297 05:22:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:27:52.297 "name": "raid_bdev1", 01:27:52.297 "uuid": "9e2d7140-e452-40ec-846c-2f0043fbb011", 01:27:52.297 "strip_size_kb": 0, 01:27:52.297 "state": "online", 01:27:52.297 "raid_level": "raid1", 01:27:52.297 "superblock": true, 01:27:52.297 "num_base_bdevs": 2, 01:27:52.297 
"num_base_bdevs_discovered": 2, 01:27:52.297 "num_base_bdevs_operational": 2, 01:27:52.297 "base_bdevs_list": [ 01:27:52.297 { 01:27:52.297 "name": "spare", 01:27:52.297 "uuid": "bccd64db-384e-5ddc-b722-5570f52e6787", 01:27:52.297 "is_configured": true, 01:27:52.297 "data_offset": 2048, 01:27:52.297 "data_size": 63488 01:27:52.297 }, 01:27:52.297 { 01:27:52.297 "name": "BaseBdev2", 01:27:52.298 "uuid": "91c5720f-e4ea-5360-bd58-3e9243394d70", 01:27:52.298 "is_configured": true, 01:27:52.298 "data_offset": 2048, 01:27:52.298 "data_size": 63488 01:27:52.298 } 01:27:52.298 ] 01:27:52.298 }' 01:27:52.298 05:22:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:27:52.298 05:22:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 01:27:52.556 05:22:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 01:27:52.556 05:22:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:27:52.556 05:22:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 01:27:52.556 05:22:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 01:27:52.556 05:22:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:27:52.814 05:22:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:27:52.814 05:22:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:52.814 05:22:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 01:27:52.814 05:22:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:27:52.814 05:22:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:52.814 05:22:44 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:27:52.814 "name": "raid_bdev1", 01:27:52.814 "uuid": "9e2d7140-e452-40ec-846c-2f0043fbb011", 01:27:52.814 "strip_size_kb": 0, 01:27:52.814 "state": "online", 01:27:52.814 "raid_level": "raid1", 01:27:52.814 "superblock": true, 01:27:52.814 "num_base_bdevs": 2, 01:27:52.814 "num_base_bdevs_discovered": 2, 01:27:52.814 "num_base_bdevs_operational": 2, 01:27:52.814 "base_bdevs_list": [ 01:27:52.814 { 01:27:52.814 "name": "spare", 01:27:52.814 "uuid": "bccd64db-384e-5ddc-b722-5570f52e6787", 01:27:52.814 "is_configured": true, 01:27:52.814 "data_offset": 2048, 01:27:52.814 "data_size": 63488 01:27:52.814 }, 01:27:52.814 { 01:27:52.814 "name": "BaseBdev2", 01:27:52.814 "uuid": "91c5720f-e4ea-5360-bd58-3e9243394d70", 01:27:52.814 "is_configured": true, 01:27:52.814 "data_offset": 2048, 01:27:52.814 "data_size": 63488 01:27:52.814 } 01:27:52.814 ] 01:27:52.814 }' 01:27:52.814 05:22:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:27:52.814 05:22:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 01:27:52.814 05:22:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:27:52.814 05:22:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 01:27:52.814 05:22:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 01:27:52.814 05:22:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:52.814 05:22:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 01:27:52.814 05:22:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 01:27:52.814 05:22:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:52.814 05:22:44 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 01:27:52.814 05:22:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 01:27:52.814 05:22:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:52.814 05:22:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 01:27:52.814 [2024-12-09 05:22:44.385579] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 01:27:52.814 05:22:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:52.814 05:22:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 01:27:52.814 05:22:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:27:52.814 05:22:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:27:52.814 05:22:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:27:52.814 05:22:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:27:52.814 05:22:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 01:27:52.814 05:22:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:27:52.814 05:22:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:27:52.814 05:22:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:27:52.814 05:22:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 01:27:52.814 05:22:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:27:52.814 05:22:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:52.814 05:22:44 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 01:27:52.814 05:22:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:27:52.814 05:22:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:53.073 05:22:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:27:53.073 "name": "raid_bdev1", 01:27:53.073 "uuid": "9e2d7140-e452-40ec-846c-2f0043fbb011", 01:27:53.073 "strip_size_kb": 0, 01:27:53.073 "state": "online", 01:27:53.073 "raid_level": "raid1", 01:27:53.073 "superblock": true, 01:27:53.073 "num_base_bdevs": 2, 01:27:53.073 "num_base_bdevs_discovered": 1, 01:27:53.073 "num_base_bdevs_operational": 1, 01:27:53.073 "base_bdevs_list": [ 01:27:53.073 { 01:27:53.073 "name": null, 01:27:53.073 "uuid": "00000000-0000-0000-0000-000000000000", 01:27:53.073 "is_configured": false, 01:27:53.073 "data_offset": 0, 01:27:53.073 "data_size": 63488 01:27:53.073 }, 01:27:53.073 { 01:27:53.073 "name": "BaseBdev2", 01:27:53.073 "uuid": "91c5720f-e4ea-5360-bd58-3e9243394d70", 01:27:53.073 "is_configured": true, 01:27:53.073 "data_offset": 2048, 01:27:53.073 "data_size": 63488 01:27:53.073 } 01:27:53.073 ] 01:27:53.073 }' 01:27:53.073 05:22:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:27:53.073 05:22:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 01:27:53.331 05:22:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 01:27:53.331 05:22:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:53.331 05:22:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 01:27:53.331 [2024-12-09 05:22:44.905891] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 01:27:53.331 [2024-12-09 05:22:44.906205] 
bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 01:27:53.331 [2024-12-09 05:22:44.906240] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 01:27:53.331 [2024-12-09 05:22:44.906304] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 01:27:53.331 [2024-12-09 05:22:44.922112] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b1a0 01:27:53.331 05:22:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:53.331 05:22:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 01:27:53.331 [2024-12-09 05:22:44.924813] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 01:27:54.703 05:22:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 01:27:54.703 05:22:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:27:54.703 05:22:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 01:27:54.703 05:22:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 01:27:54.703 05:22:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:27:54.703 05:22:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:27:54.703 05:22:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:54.703 05:22:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:27:54.703 05:22:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 01:27:54.703 05:22:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 01:27:54.703 05:22:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:27:54.703 "name": "raid_bdev1", 01:27:54.703 "uuid": "9e2d7140-e452-40ec-846c-2f0043fbb011", 01:27:54.703 "strip_size_kb": 0, 01:27:54.703 "state": "online", 01:27:54.703 "raid_level": "raid1", 01:27:54.703 "superblock": true, 01:27:54.703 "num_base_bdevs": 2, 01:27:54.703 "num_base_bdevs_discovered": 2, 01:27:54.703 "num_base_bdevs_operational": 2, 01:27:54.703 "process": { 01:27:54.703 "type": "rebuild", 01:27:54.703 "target": "spare", 01:27:54.703 "progress": { 01:27:54.703 "blocks": 20480, 01:27:54.703 "percent": 32 01:27:54.703 } 01:27:54.703 }, 01:27:54.703 "base_bdevs_list": [ 01:27:54.703 { 01:27:54.703 "name": "spare", 01:27:54.703 "uuid": "bccd64db-384e-5ddc-b722-5570f52e6787", 01:27:54.703 "is_configured": true, 01:27:54.703 "data_offset": 2048, 01:27:54.703 "data_size": 63488 01:27:54.703 }, 01:27:54.703 { 01:27:54.703 "name": "BaseBdev2", 01:27:54.703 "uuid": "91c5720f-e4ea-5360-bd58-3e9243394d70", 01:27:54.703 "is_configured": true, 01:27:54.703 "data_offset": 2048, 01:27:54.703 "data_size": 63488 01:27:54.703 } 01:27:54.703 ] 01:27:54.703 }' 01:27:54.703 05:22:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:27:54.704 05:22:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 01:27:54.704 05:22:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:27:54.704 05:22:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 01:27:54.704 05:22:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 01:27:54.704 05:22:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:54.704 05:22:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 01:27:54.704 
[2024-12-09 05:22:46.102990] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 01:27:54.704 [2024-12-09 05:22:46.134794] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 01:27:54.704 [2024-12-09 05:22:46.134884] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:27:54.704 [2024-12-09 05:22:46.134910] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 01:27:54.704 [2024-12-09 05:22:46.134945] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 01:27:54.704 05:22:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:54.704 05:22:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 01:27:54.704 05:22:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:27:54.704 05:22:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:27:54.704 05:22:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:27:54.704 05:22:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:27:54.704 05:22:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 01:27:54.704 05:22:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:27:54.704 05:22:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:27:54.704 05:22:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:27:54.704 05:22:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 01:27:54.704 05:22:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:27:54.704 05:22:46 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:27:54.704 05:22:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:54.704 05:22:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 01:27:54.704 05:22:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:54.704 05:22:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:27:54.704 "name": "raid_bdev1", 01:27:54.704 "uuid": "9e2d7140-e452-40ec-846c-2f0043fbb011", 01:27:54.704 "strip_size_kb": 0, 01:27:54.704 "state": "online", 01:27:54.704 "raid_level": "raid1", 01:27:54.704 "superblock": true, 01:27:54.704 "num_base_bdevs": 2, 01:27:54.704 "num_base_bdevs_discovered": 1, 01:27:54.704 "num_base_bdevs_operational": 1, 01:27:54.704 "base_bdevs_list": [ 01:27:54.704 { 01:27:54.704 "name": null, 01:27:54.704 "uuid": "00000000-0000-0000-0000-000000000000", 01:27:54.704 "is_configured": false, 01:27:54.704 "data_offset": 0, 01:27:54.704 "data_size": 63488 01:27:54.704 }, 01:27:54.704 { 01:27:54.704 "name": "BaseBdev2", 01:27:54.704 "uuid": "91c5720f-e4ea-5360-bd58-3e9243394d70", 01:27:54.704 "is_configured": true, 01:27:54.704 "data_offset": 2048, 01:27:54.704 "data_size": 63488 01:27:54.704 } 01:27:54.704 ] 01:27:54.704 }' 01:27:54.704 05:22:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:27:54.704 05:22:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 01:27:55.273 05:22:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 01:27:55.273 05:22:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:55.273 05:22:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 01:27:55.273 [2024-12-09 05:22:46.689426] 
vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 01:27:55.273 [2024-12-09 05:22:46.689564] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:27:55.273 [2024-12-09 05:22:46.689607] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 01:27:55.273 [2024-12-09 05:22:46.689622] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:27:55.273 [2024-12-09 05:22:46.690389] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:27:55.273 [2024-12-09 05:22:46.690431] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 01:27:55.273 [2024-12-09 05:22:46.690567] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 01:27:55.273 [2024-12-09 05:22:46.690587] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 01:27:55.273 [2024-12-09 05:22:46.690603] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
01:27:55.273 [2024-12-09 05:22:46.690646] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 01:27:55.273 [2024-12-09 05:22:46.706710] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b270 01:27:55.273 spare 01:27:55.273 05:22:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:55.273 05:22:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 01:27:55.273 [2024-12-09 05:22:46.709595] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 01:27:56.208 05:22:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 01:27:56.208 05:22:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:27:56.208 05:22:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 01:27:56.208 05:22:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 01:27:56.208 05:22:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:27:56.208 05:22:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:27:56.208 05:22:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:27:56.208 05:22:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:56.208 05:22:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 01:27:56.208 05:22:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:56.208 05:22:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:27:56.208 "name": "raid_bdev1", 01:27:56.208 "uuid": "9e2d7140-e452-40ec-846c-2f0043fbb011", 01:27:56.208 "strip_size_kb": 0, 01:27:56.208 
"state": "online", 01:27:56.208 "raid_level": "raid1", 01:27:56.208 "superblock": true, 01:27:56.208 "num_base_bdevs": 2, 01:27:56.208 "num_base_bdevs_discovered": 2, 01:27:56.208 "num_base_bdevs_operational": 2, 01:27:56.208 "process": { 01:27:56.208 "type": "rebuild", 01:27:56.208 "target": "spare", 01:27:56.208 "progress": { 01:27:56.208 "blocks": 20480, 01:27:56.208 "percent": 32 01:27:56.208 } 01:27:56.208 }, 01:27:56.208 "base_bdevs_list": [ 01:27:56.208 { 01:27:56.208 "name": "spare", 01:27:56.208 "uuid": "bccd64db-384e-5ddc-b722-5570f52e6787", 01:27:56.208 "is_configured": true, 01:27:56.208 "data_offset": 2048, 01:27:56.208 "data_size": 63488 01:27:56.208 }, 01:27:56.208 { 01:27:56.208 "name": "BaseBdev2", 01:27:56.208 "uuid": "91c5720f-e4ea-5360-bd58-3e9243394d70", 01:27:56.208 "is_configured": true, 01:27:56.208 "data_offset": 2048, 01:27:56.208 "data_size": 63488 01:27:56.208 } 01:27:56.208 ] 01:27:56.208 }' 01:27:56.208 05:22:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:27:56.467 05:22:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 01:27:56.467 05:22:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:27:56.467 05:22:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 01:27:56.467 05:22:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 01:27:56.467 05:22:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:56.467 05:22:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 01:27:56.467 [2024-12-09 05:22:47.883612] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 01:27:56.467 [2024-12-09 05:22:47.919470] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 
01:27:56.467 [2024-12-09 05:22:47.919582] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:27:56.467 [2024-12-09 05:22:47.919606] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 01:27:56.467 [2024-12-09 05:22:47.919620] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 01:27:56.467 05:22:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:56.467 05:22:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 01:27:56.467 05:22:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:27:56.467 05:22:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:27:56.467 05:22:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:27:56.467 05:22:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:27:56.467 05:22:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 01:27:56.467 05:22:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:27:56.467 05:22:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:27:56.467 05:22:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:27:56.467 05:22:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 01:27:56.467 05:22:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:27:56.467 05:22:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:27:56.467 05:22:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:56.467 05:22:47 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 01:27:56.467 05:22:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:56.467 05:22:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:27:56.467 "name": "raid_bdev1", 01:27:56.467 "uuid": "9e2d7140-e452-40ec-846c-2f0043fbb011", 01:27:56.467 "strip_size_kb": 0, 01:27:56.467 "state": "online", 01:27:56.467 "raid_level": "raid1", 01:27:56.467 "superblock": true, 01:27:56.467 "num_base_bdevs": 2, 01:27:56.467 "num_base_bdevs_discovered": 1, 01:27:56.467 "num_base_bdevs_operational": 1, 01:27:56.467 "base_bdevs_list": [ 01:27:56.467 { 01:27:56.467 "name": null, 01:27:56.467 "uuid": "00000000-0000-0000-0000-000000000000", 01:27:56.467 "is_configured": false, 01:27:56.467 "data_offset": 0, 01:27:56.467 "data_size": 63488 01:27:56.467 }, 01:27:56.467 { 01:27:56.467 "name": "BaseBdev2", 01:27:56.467 "uuid": "91c5720f-e4ea-5360-bd58-3e9243394d70", 01:27:56.467 "is_configured": true, 01:27:56.467 "data_offset": 2048, 01:27:56.467 "data_size": 63488 01:27:56.467 } 01:27:56.467 ] 01:27:56.467 }' 01:27:56.467 05:22:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:27:56.467 05:22:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 01:27:57.036 05:22:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 01:27:57.036 05:22:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:27:57.036 05:22:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 01:27:57.036 05:22:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 01:27:57.036 05:22:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:27:57.036 05:22:48 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:27:57.036 05:22:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:57.036 05:22:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:27:57.036 05:22:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 01:27:57.036 05:22:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:57.036 05:22:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:27:57.036 "name": "raid_bdev1", 01:27:57.036 "uuid": "9e2d7140-e452-40ec-846c-2f0043fbb011", 01:27:57.036 "strip_size_kb": 0, 01:27:57.036 "state": "online", 01:27:57.036 "raid_level": "raid1", 01:27:57.036 "superblock": true, 01:27:57.036 "num_base_bdevs": 2, 01:27:57.036 "num_base_bdevs_discovered": 1, 01:27:57.036 "num_base_bdevs_operational": 1, 01:27:57.036 "base_bdevs_list": [ 01:27:57.036 { 01:27:57.036 "name": null, 01:27:57.036 "uuid": "00000000-0000-0000-0000-000000000000", 01:27:57.036 "is_configured": false, 01:27:57.036 "data_offset": 0, 01:27:57.036 "data_size": 63488 01:27:57.036 }, 01:27:57.036 { 01:27:57.036 "name": "BaseBdev2", 01:27:57.036 "uuid": "91c5720f-e4ea-5360-bd58-3e9243394d70", 01:27:57.036 "is_configured": true, 01:27:57.036 "data_offset": 2048, 01:27:57.036 "data_size": 63488 01:27:57.036 } 01:27:57.036 ] 01:27:57.036 }' 01:27:57.036 05:22:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:27:57.036 05:22:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 01:27:57.036 05:22:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:27:57.036 05:22:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 01:27:57.036 05:22:48 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 01:27:57.296 05:22:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:57.296 05:22:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 01:27:57.296 05:22:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:57.296 05:22:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 01:27:57.296 05:22:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:57.296 05:22:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 01:27:57.296 [2024-12-09 05:22:48.669301] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 01:27:57.296 [2024-12-09 05:22:48.669404] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:27:57.296 [2024-12-09 05:22:48.669440] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 01:27:57.296 [2024-12-09 05:22:48.669471] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:27:57.296 [2024-12-09 05:22:48.670099] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:27:57.296 [2024-12-09 05:22:48.670163] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 01:27:57.296 [2024-12-09 05:22:48.670318] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 01:27:57.296 [2024-12-09 05:22:48.670347] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 01:27:57.296 [2024-12-09 05:22:48.670359] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 01:27:57.296 [2024-12-09 05:22:48.670375] 
bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 01:27:57.296 BaseBdev1 01:27:57.296 05:22:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:57.296 05:22:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 01:27:58.232 05:22:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 01:27:58.232 05:22:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:27:58.232 05:22:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:27:58.232 05:22:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:27:58.232 05:22:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:27:58.232 05:22:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 01:27:58.232 05:22:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:27:58.232 05:22:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:27:58.232 05:22:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:27:58.232 05:22:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 01:27:58.232 05:22:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:27:58.232 05:22:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:27:58.232 05:22:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:58.232 05:22:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 01:27:58.232 05:22:49 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:58.232 05:22:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:27:58.232 "name": "raid_bdev1", 01:27:58.232 "uuid": "9e2d7140-e452-40ec-846c-2f0043fbb011", 01:27:58.232 "strip_size_kb": 0, 01:27:58.232 "state": "online", 01:27:58.232 "raid_level": "raid1", 01:27:58.232 "superblock": true, 01:27:58.232 "num_base_bdevs": 2, 01:27:58.232 "num_base_bdevs_discovered": 1, 01:27:58.232 "num_base_bdevs_operational": 1, 01:27:58.232 "base_bdevs_list": [ 01:27:58.232 { 01:27:58.232 "name": null, 01:27:58.232 "uuid": "00000000-0000-0000-0000-000000000000", 01:27:58.232 "is_configured": false, 01:27:58.232 "data_offset": 0, 01:27:58.232 "data_size": 63488 01:27:58.232 }, 01:27:58.232 { 01:27:58.232 "name": "BaseBdev2", 01:27:58.232 "uuid": "91c5720f-e4ea-5360-bd58-3e9243394d70", 01:27:58.232 "is_configured": true, 01:27:58.233 "data_offset": 2048, 01:27:58.233 "data_size": 63488 01:27:58.233 } 01:27:58.233 ] 01:27:58.233 }' 01:27:58.233 05:22:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:27:58.233 05:22:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 01:27:58.799 05:22:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 01:27:58.799 05:22:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:27:58.799 05:22:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 01:27:58.799 05:22:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 01:27:58.799 05:22:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:27:58.799 05:22:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:27:58.799 05:22:50 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:27:58.799 05:22:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:58.799 05:22:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 01:27:58.799 05:22:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:58.799 05:22:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:27:58.799 "name": "raid_bdev1", 01:27:58.799 "uuid": "9e2d7140-e452-40ec-846c-2f0043fbb011", 01:27:58.799 "strip_size_kb": 0, 01:27:58.799 "state": "online", 01:27:58.799 "raid_level": "raid1", 01:27:58.799 "superblock": true, 01:27:58.799 "num_base_bdevs": 2, 01:27:58.799 "num_base_bdevs_discovered": 1, 01:27:58.799 "num_base_bdevs_operational": 1, 01:27:58.799 "base_bdevs_list": [ 01:27:58.799 { 01:27:58.799 "name": null, 01:27:58.799 "uuid": "00000000-0000-0000-0000-000000000000", 01:27:58.799 "is_configured": false, 01:27:58.799 "data_offset": 0, 01:27:58.799 "data_size": 63488 01:27:58.799 }, 01:27:58.799 { 01:27:58.799 "name": "BaseBdev2", 01:27:58.799 "uuid": "91c5720f-e4ea-5360-bd58-3e9243394d70", 01:27:58.799 "is_configured": true, 01:27:58.799 "data_offset": 2048, 01:27:58.799 "data_size": 63488 01:27:58.799 } 01:27:58.799 ] 01:27:58.799 }' 01:27:58.799 05:22:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:27:58.799 05:22:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 01:27:58.799 05:22:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:27:58.799 05:22:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 01:27:58.799 05:22:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 01:27:58.799 05:22:50 bdev_raid.raid_rebuild_test_sb_io 
-- common/autotest_common.sh@652 -- # local es=0 01:27:58.799 05:22:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 01:27:58.799 05:22:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 01:27:58.799 05:22:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:27:58.799 05:22:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 01:27:58.799 05:22:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:27:58.799 05:22:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 01:27:58.799 05:22:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:58.799 05:22:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 01:27:58.799 [2024-12-09 05:22:50.402239] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 01:27:58.799 [2024-12-09 05:22:50.402548] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 01:27:58.799 [2024-12-09 05:22:50.402566] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 01:27:58.799 request: 01:27:58.799 { 01:27:58.799 "base_bdev": "BaseBdev1", 01:27:58.799 "raid_bdev": "raid_bdev1", 01:27:58.799 "method": "bdev_raid_add_base_bdev", 01:27:58.799 "req_id": 1 01:27:58.799 } 01:27:58.799 Got JSON-RPC error response 01:27:58.799 response: 01:27:58.799 { 01:27:58.799 "code": -22, 01:27:58.799 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 01:27:58.799 } 01:27:58.799 05:22:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 
01:27:58.799 05:22:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 01:27:58.799 05:22:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:27:58.799 05:22:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:27:58.799 05:22:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:27:58.799 05:22:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 01:28:00.172 05:22:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 01:28:00.172 05:22:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:28:00.172 05:22:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:28:00.172 05:22:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:28:00.172 05:22:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:28:00.172 05:22:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 01:28:00.172 05:22:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:28:00.172 05:22:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:28:00.172 05:22:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:28:00.172 05:22:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 01:28:00.172 05:22:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:28:00.172 05:22:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:28:00.172 05:22:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # 
xtrace_disable 01:28:00.172 05:22:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 01:28:00.172 05:22:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:00.172 05:22:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:28:00.172 "name": "raid_bdev1", 01:28:00.172 "uuid": "9e2d7140-e452-40ec-846c-2f0043fbb011", 01:28:00.172 "strip_size_kb": 0, 01:28:00.172 "state": "online", 01:28:00.172 "raid_level": "raid1", 01:28:00.172 "superblock": true, 01:28:00.172 "num_base_bdevs": 2, 01:28:00.172 "num_base_bdevs_discovered": 1, 01:28:00.172 "num_base_bdevs_operational": 1, 01:28:00.173 "base_bdevs_list": [ 01:28:00.173 { 01:28:00.173 "name": null, 01:28:00.173 "uuid": "00000000-0000-0000-0000-000000000000", 01:28:00.173 "is_configured": false, 01:28:00.173 "data_offset": 0, 01:28:00.173 "data_size": 63488 01:28:00.173 }, 01:28:00.173 { 01:28:00.173 "name": "BaseBdev2", 01:28:00.173 "uuid": "91c5720f-e4ea-5360-bd58-3e9243394d70", 01:28:00.173 "is_configured": true, 01:28:00.173 "data_offset": 2048, 01:28:00.173 "data_size": 63488 01:28:00.173 } 01:28:00.173 ] 01:28:00.173 }' 01:28:00.173 05:22:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:28:00.173 05:22:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 01:28:00.431 05:22:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 01:28:00.431 05:22:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:28:00.431 05:22:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 01:28:00.431 05:22:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 01:28:00.431 05:22:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:28:00.431 05:22:51 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:28:00.431 05:22:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:28:00.431 05:22:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:00.431 05:22:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 01:28:00.431 05:22:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:00.431 05:22:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:28:00.431 "name": "raid_bdev1", 01:28:00.431 "uuid": "9e2d7140-e452-40ec-846c-2f0043fbb011", 01:28:00.431 "strip_size_kb": 0, 01:28:00.431 "state": "online", 01:28:00.431 "raid_level": "raid1", 01:28:00.431 "superblock": true, 01:28:00.431 "num_base_bdevs": 2, 01:28:00.431 "num_base_bdevs_discovered": 1, 01:28:00.431 "num_base_bdevs_operational": 1, 01:28:00.431 "base_bdevs_list": [ 01:28:00.431 { 01:28:00.431 "name": null, 01:28:00.431 "uuid": "00000000-0000-0000-0000-000000000000", 01:28:00.431 "is_configured": false, 01:28:00.431 "data_offset": 0, 01:28:00.431 "data_size": 63488 01:28:00.431 }, 01:28:00.431 { 01:28:00.431 "name": "BaseBdev2", 01:28:00.431 "uuid": "91c5720f-e4ea-5360-bd58-3e9243394d70", 01:28:00.431 "is_configured": true, 01:28:00.431 "data_offset": 2048, 01:28:00.431 "data_size": 63488 01:28:00.431 } 01:28:00.431 ] 01:28:00.431 }' 01:28:00.431 05:22:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:28:00.689 05:22:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 01:28:00.689 05:22:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:28:00.689 05:22:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 01:28:00.689 05:22:52 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 77009 01:28:00.689 05:22:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 77009 ']' 01:28:00.689 05:22:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 77009 01:28:00.689 05:22:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 01:28:00.689 05:22:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:28:00.689 05:22:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77009 01:28:00.689 05:22:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:28:00.689 05:22:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:28:00.689 killing process with pid 77009 01:28:00.689 05:22:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77009' 01:28:00.689 05:22:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 77009 01:28:00.689 Received shutdown signal, test time was about 18.356409 seconds 01:28:00.689 01:28:00.689 Latency(us) 01:28:00.689 [2024-12-09T05:22:52.306Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:28:00.689 [2024-12-09T05:22:52.306Z] =================================================================================================================== 01:28:00.689 [2024-12-09T05:22:52.306Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:28:00.689 [2024-12-09 05:22:52.158302] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 01:28:00.689 [2024-12-09 05:22:52.158542] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 01:28:00.689 [2024-12-09 05:22:52.158628] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in 
destruct 01:28:00.689 [2024-12-09 05:22:52.158645] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 01:28:00.689 05:22:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 77009 01:28:00.946 [2024-12-09 05:22:52.349696] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 01:28:01.879 05:22:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 01:28:01.879 01:28:01.879 real 0m21.683s 01:28:01.879 user 0m29.368s 01:28:01.879 sys 0m2.121s 01:28:01.879 05:22:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 01:28:01.879 05:22:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 01:28:01.879 ************************************ 01:28:01.879 END TEST raid_rebuild_test_sb_io 01:28:01.879 ************************************ 01:28:02.137 05:22:53 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 01:28:02.137 05:22:53 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 01:28:02.137 05:22:53 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 01:28:02.137 05:22:53 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 01:28:02.137 05:22:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 01:28:02.137 ************************************ 01:28:02.137 START TEST raid_rebuild_test 01:28:02.137 ************************************ 01:28:02.137 05:22:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false false true 01:28:02.137 05:22:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 01:28:02.137 05:22:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 01:28:02.137 05:22:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 01:28:02.137 05:22:53 bdev_raid.raid_rebuild_test 
-- bdev/bdev_raid.sh@572 -- # local background_io=false 01:28:02.137 05:22:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 01:28:02.137 05:22:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 01:28:02.137 05:22:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 01:28:02.137 05:22:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 01:28:02.137 05:22:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 01:28:02.137 05:22:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 01:28:02.137 05:22:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 01:28:02.137 05:22:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 01:28:02.137 05:22:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 01:28:02.137 05:22:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 01:28:02.137 05:22:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 01:28:02.137 05:22:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 01:28:02.137 05:22:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 01:28:02.137 05:22:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 01:28:02.137 05:22:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 01:28:02.138 05:22:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 01:28:02.138 05:22:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 01:28:02.138 05:22:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 01:28:02.138 05:22:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 01:28:02.138 05:22:53 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 01:28:02.138 05:22:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 01:28:02.138 05:22:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 01:28:02.138 05:22:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 01:28:02.138 05:22:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 01:28:02.138 05:22:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 01:28:02.138 05:22:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=77709 01:28:02.138 05:22:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 77709 01:28:02.138 05:22:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 01:28:02.138 05:22:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 77709 ']' 01:28:02.138 05:22:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:28:02.138 05:22:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 01:28:02.138 05:22:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:28:02.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:28:02.138 05:22:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 01:28:02.138 05:22:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 01:28:02.138 [2024-12-09 05:22:53.653989] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:28:02.138 I/O size of 3145728 is greater than zero copy threshold (65536). 
01:28:02.138 Zero copy mechanism will not be used. 01:28:02.138 [2024-12-09 05:22:53.654588] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77709 ] 01:28:02.395 [2024-12-09 05:22:53.837888] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:28:02.395 [2024-12-09 05:22:53.973476] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:28:02.653 [2024-12-09 05:22:54.168181] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 01:28:02.653 [2024-12-09 05:22:54.168260] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 01:28:03.218 05:22:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:28:03.218 05:22:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 01:28:03.218 05:22:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 01:28:03.218 05:22:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 01:28:03.218 05:22:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:03.218 05:22:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 01:28:03.218 BaseBdev1_malloc 01:28:03.218 05:22:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:03.218 05:22:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 01:28:03.218 05:22:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:03.218 05:22:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 01:28:03.218 [2024-12-09 05:22:54.664595] vbdev_passthru.c: 
608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 01:28:03.218 [2024-12-09 05:22:54.664703] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:28:03.218 [2024-12-09 05:22:54.664748] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 01:28:03.218 [2024-12-09 05:22:54.664766] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:28:03.218 [2024-12-09 05:22:54.667511] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:28:03.218 [2024-12-09 05:22:54.667571] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 01:28:03.218 BaseBdev1 01:28:03.218 05:22:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:03.218 05:22:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 01:28:03.218 05:22:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 01:28:03.218 05:22:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:03.218 05:22:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 01:28:03.218 BaseBdev2_malloc 01:28:03.218 05:22:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:03.218 05:22:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 01:28:03.218 05:22:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:03.218 05:22:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 01:28:03.218 [2024-12-09 05:22:54.711241] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 01:28:03.218 [2024-12-09 05:22:54.711363] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:28:03.218 [2024-12-09 05:22:54.711411] 
vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 01:28:03.218 [2024-12-09 05:22:54.711429] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:28:03.218 [2024-12-09 05:22:54.714077] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:28:03.218 [2024-12-09 05:22:54.714134] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 01:28:03.218 BaseBdev2 01:28:03.218 05:22:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:03.218 05:22:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 01:28:03.218 05:22:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 01:28:03.218 05:22:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:03.218 05:22:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 01:28:03.218 BaseBdev3_malloc 01:28:03.218 05:22:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:03.218 05:22:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 01:28:03.218 05:22:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:03.218 05:22:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 01:28:03.219 [2024-12-09 05:22:54.775982] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 01:28:03.219 [2024-12-09 05:22:54.776090] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:28:03.219 [2024-12-09 05:22:54.776122] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 01:28:03.219 [2024-12-09 05:22:54.776139] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:28:03.219 
[2024-12-09 05:22:54.778889] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:28:03.219 [2024-12-09 05:22:54.778945] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 01:28:03.219 BaseBdev3 01:28:03.219 05:22:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:03.219 05:22:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 01:28:03.219 05:22:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 01:28:03.219 05:22:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:03.219 05:22:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 01:28:03.219 BaseBdev4_malloc 01:28:03.219 05:22:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:03.219 05:22:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 01:28:03.219 05:22:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:03.219 05:22:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 01:28:03.219 [2024-12-09 05:22:54.822831] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 01:28:03.219 [2024-12-09 05:22:54.822954] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:28:03.219 [2024-12-09 05:22:54.822994] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 01:28:03.219 [2024-12-09 05:22:54.823011] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:28:03.219 [2024-12-09 05:22:54.825921] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:28:03.219 [2024-12-09 05:22:54.825984] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev4 01:28:03.219 BaseBdev4 01:28:03.219 05:22:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:03.219 05:22:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 01:28:03.219 05:22:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:03.219 05:22:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 01:28:03.477 spare_malloc 01:28:03.477 05:22:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:03.477 05:22:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 01:28:03.477 05:22:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:03.477 05:22:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 01:28:03.477 spare_delay 01:28:03.477 05:22:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:03.477 05:22:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 01:28:03.477 05:22:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:03.477 05:22:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 01:28:03.477 [2024-12-09 05:22:54.889107] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 01:28:03.477 [2024-12-09 05:22:54.889203] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:28:03.477 [2024-12-09 05:22:54.889232] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 01:28:03.477 [2024-12-09 05:22:54.889250] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:28:03.477 [2024-12-09 05:22:54.892088] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: 
pt_bdev registered 01:28:03.477 [2024-12-09 05:22:54.892137] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 01:28:03.477 spare 01:28:03.477 05:22:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:03.477 05:22:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 01:28:03.477 05:22:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:03.477 05:22:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 01:28:03.477 [2024-12-09 05:22:54.901146] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 01:28:03.477 [2024-12-09 05:22:54.903609] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 01:28:03.477 [2024-12-09 05:22:54.903704] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 01:28:03.477 [2024-12-09 05:22:54.903789] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 01:28:03.477 [2024-12-09 05:22:54.903901] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 01:28:03.477 [2024-12-09 05:22:54.903931] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 01:28:03.477 [2024-12-09 05:22:54.904232] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 01:28:03.477 [2024-12-09 05:22:54.904496] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 01:28:03.477 [2024-12-09 05:22:54.904515] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 01:28:03.477 [2024-12-09 05:22:54.904691] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:28:03.477 05:22:54 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:03.477 05:22:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 01:28:03.477 05:22:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:28:03.477 05:22:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:28:03.477 05:22:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:28:03.477 05:22:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:28:03.477 05:22:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 01:28:03.477 05:22:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:28:03.477 05:22:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:28:03.477 05:22:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:28:03.477 05:22:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:28:03.477 05:22:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:28:03.477 05:22:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:28:03.477 05:22:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:03.477 05:22:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 01:28:03.477 05:22:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:03.477 05:22:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:28:03.477 "name": "raid_bdev1", 01:28:03.477 "uuid": "767f67dd-2768-48f0-95b1-f2d9a0c5ae18", 01:28:03.477 "strip_size_kb": 0, 01:28:03.477 "state": "online", 01:28:03.477 "raid_level": "raid1", 01:28:03.477 "superblock": false, 01:28:03.477 
"num_base_bdevs": 4, 01:28:03.477 "num_base_bdevs_discovered": 4, 01:28:03.477 "num_base_bdevs_operational": 4, 01:28:03.477 "base_bdevs_list": [ 01:28:03.477 { 01:28:03.477 "name": "BaseBdev1", 01:28:03.477 "uuid": "8143bcdd-7f1e-54c3-9c63-212e86ca6bbc", 01:28:03.477 "is_configured": true, 01:28:03.477 "data_offset": 0, 01:28:03.477 "data_size": 65536 01:28:03.477 }, 01:28:03.477 { 01:28:03.477 "name": "BaseBdev2", 01:28:03.477 "uuid": "49b0a9d7-e373-5c4a-8b8d-0037f0670965", 01:28:03.477 "is_configured": true, 01:28:03.477 "data_offset": 0, 01:28:03.477 "data_size": 65536 01:28:03.477 }, 01:28:03.477 { 01:28:03.477 "name": "BaseBdev3", 01:28:03.477 "uuid": "d60399a2-b9f2-5afa-b62f-98711bdb4f46", 01:28:03.477 "is_configured": true, 01:28:03.477 "data_offset": 0, 01:28:03.477 "data_size": 65536 01:28:03.477 }, 01:28:03.477 { 01:28:03.477 "name": "BaseBdev4", 01:28:03.477 "uuid": "99e88b76-8e9b-5f3c-b11f-f1315787b810", 01:28:03.477 "is_configured": true, 01:28:03.477 "data_offset": 0, 01:28:03.477 "data_size": 65536 01:28:03.477 } 01:28:03.477 ] 01:28:03.477 }' 01:28:03.477 05:22:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:28:03.477 05:22:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 01:28:04.042 05:22:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 01:28:04.042 05:22:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 01:28:04.043 05:22:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:04.043 05:22:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 01:28:04.043 [2024-12-09 05:22:55.497855] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 01:28:04.043 05:22:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:04.043 05:22:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # 
raid_bdev_size=65536 01:28:04.043 05:22:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 01:28:04.043 05:22:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:04.043 05:22:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 01:28:04.043 05:22:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 01:28:04.043 05:22:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:04.043 05:22:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 01:28:04.043 05:22:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 01:28:04.043 05:22:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 01:28:04.043 05:22:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 01:28:04.043 05:22:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 01:28:04.043 05:22:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 01:28:04.043 05:22:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 01:28:04.043 05:22:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 01:28:04.043 05:22:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 01:28:04.043 05:22:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 01:28:04.043 05:22:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 01:28:04.043 05:22:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 01:28:04.043 05:22:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 01:28:04.043 05:22:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 01:28:04.301 [2024-12-09 05:22:55.913791] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 01:28:04.559 /dev/nbd0 01:28:04.559 05:22:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 01:28:04.559 05:22:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 01:28:04.559 05:22:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 01:28:04.559 05:22:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 01:28:04.559 05:22:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 01:28:04.559 05:22:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 01:28:04.559 05:22:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 01:28:04.559 05:22:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 01:28:04.559 05:22:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 01:28:04.559 05:22:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 01:28:04.559 05:22:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 01:28:04.559 1+0 records in 01:28:04.559 1+0 records out 01:28:04.559 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000411561 s, 10.0 MB/s 01:28:04.559 05:22:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:28:04.559 05:22:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 01:28:04.559 05:22:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:28:04.559 05:22:55 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 01:28:04.559 05:22:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 01:28:04.559 05:22:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 01:28:04.559 05:22:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 01:28:04.559 05:22:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 01:28:04.559 05:22:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 01:28:04.559 05:22:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 01:28:14.525 65536+0 records in 01:28:14.525 65536+0 records out 01:28:14.525 33554432 bytes (34 MB, 32 MiB) copied, 8.55294 s, 3.9 MB/s 01:28:14.525 05:23:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 01:28:14.525 05:23:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 01:28:14.525 05:23:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 01:28:14.525 05:23:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 01:28:14.525 05:23:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 01:28:14.525 05:23:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 01:28:14.525 05:23:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 01:28:14.525 [2024-12-09 05:23:04.844046] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:28:14.525 05:23:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 01:28:14.525 05:23:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 01:28:14.525 05:23:04 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 01:28:14.525 05:23:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:28:14.525 05:23:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:28:14.525 05:23:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 01:28:14.525 05:23:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 01:28:14.525 05:23:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 01:28:14.525 05:23:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 01:28:14.525 05:23:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:14.525 05:23:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 01:28:14.525 [2024-12-09 05:23:04.884202] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 01:28:14.525 05:23:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:14.526 05:23:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 01:28:14.526 05:23:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:28:14.526 05:23:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:28:14.526 05:23:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:28:14.526 05:23:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:28:14.526 05:23:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 01:28:14.526 05:23:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:28:14.526 05:23:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:28:14.526 05:23:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- 
# local num_base_bdevs_discovered 01:28:14.526 05:23:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:28:14.526 05:23:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:28:14.526 05:23:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:28:14.526 05:23:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:14.526 05:23:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 01:28:14.526 05:23:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:14.526 05:23:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:28:14.526 "name": "raid_bdev1", 01:28:14.526 "uuid": "767f67dd-2768-48f0-95b1-f2d9a0c5ae18", 01:28:14.526 "strip_size_kb": 0, 01:28:14.526 "state": "online", 01:28:14.526 "raid_level": "raid1", 01:28:14.526 "superblock": false, 01:28:14.526 "num_base_bdevs": 4, 01:28:14.526 "num_base_bdevs_discovered": 3, 01:28:14.526 "num_base_bdevs_operational": 3, 01:28:14.526 "base_bdevs_list": [ 01:28:14.526 { 01:28:14.526 "name": null, 01:28:14.526 "uuid": "00000000-0000-0000-0000-000000000000", 01:28:14.526 "is_configured": false, 01:28:14.526 "data_offset": 0, 01:28:14.526 "data_size": 65536 01:28:14.526 }, 01:28:14.526 { 01:28:14.526 "name": "BaseBdev2", 01:28:14.526 "uuid": "49b0a9d7-e373-5c4a-8b8d-0037f0670965", 01:28:14.526 "is_configured": true, 01:28:14.526 "data_offset": 0, 01:28:14.526 "data_size": 65536 01:28:14.526 }, 01:28:14.526 { 01:28:14.526 "name": "BaseBdev3", 01:28:14.526 "uuid": "d60399a2-b9f2-5afa-b62f-98711bdb4f46", 01:28:14.526 "is_configured": true, 01:28:14.526 "data_offset": 0, 01:28:14.526 "data_size": 65536 01:28:14.526 }, 01:28:14.526 { 01:28:14.526 "name": "BaseBdev4", 01:28:14.526 "uuid": "99e88b76-8e9b-5f3c-b11f-f1315787b810", 01:28:14.526 "is_configured": true, 01:28:14.526 "data_offset": 0, 
01:28:14.526 "data_size": 65536 01:28:14.526 } 01:28:14.526 ] 01:28:14.526 }' 01:28:14.526 05:23:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:28:14.526 05:23:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 01:28:14.526 05:23:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 01:28:14.526 05:23:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:14.526 05:23:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 01:28:14.526 [2024-12-09 05:23:05.380262] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 01:28:14.526 [2024-12-09 05:23:05.392933] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09d70 01:28:14.526 05:23:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:14.526 05:23:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 01:28:14.526 [2024-12-09 05:23:05.395404] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 01:28:15.092 05:23:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 01:28:15.092 05:23:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:28:15.092 05:23:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 01:28:15.092 05:23:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 01:28:15.092 05:23:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:28:15.092 05:23:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:28:15.092 05:23:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:28:15.092 05:23:06 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:15.092 05:23:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 01:28:15.092 05:23:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:15.092 05:23:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:28:15.092 "name": "raid_bdev1", 01:28:15.092 "uuid": "767f67dd-2768-48f0-95b1-f2d9a0c5ae18", 01:28:15.092 "strip_size_kb": 0, 01:28:15.092 "state": "online", 01:28:15.092 "raid_level": "raid1", 01:28:15.092 "superblock": false, 01:28:15.092 "num_base_bdevs": 4, 01:28:15.092 "num_base_bdevs_discovered": 4, 01:28:15.092 "num_base_bdevs_operational": 4, 01:28:15.092 "process": { 01:28:15.092 "type": "rebuild", 01:28:15.092 "target": "spare", 01:28:15.092 "progress": { 01:28:15.092 "blocks": 20480, 01:28:15.092 "percent": 31 01:28:15.092 } 01:28:15.092 }, 01:28:15.092 "base_bdevs_list": [ 01:28:15.092 { 01:28:15.092 "name": "spare", 01:28:15.092 "uuid": "a37431bb-ba66-595d-8924-1c587a6a737f", 01:28:15.092 "is_configured": true, 01:28:15.092 "data_offset": 0, 01:28:15.092 "data_size": 65536 01:28:15.092 }, 01:28:15.092 { 01:28:15.092 "name": "BaseBdev2", 01:28:15.092 "uuid": "49b0a9d7-e373-5c4a-8b8d-0037f0670965", 01:28:15.092 "is_configured": true, 01:28:15.092 "data_offset": 0, 01:28:15.092 "data_size": 65536 01:28:15.092 }, 01:28:15.092 { 01:28:15.092 "name": "BaseBdev3", 01:28:15.092 "uuid": "d60399a2-b9f2-5afa-b62f-98711bdb4f46", 01:28:15.092 "is_configured": true, 01:28:15.092 "data_offset": 0, 01:28:15.092 "data_size": 65536 01:28:15.092 }, 01:28:15.092 { 01:28:15.093 "name": "BaseBdev4", 01:28:15.093 "uuid": "99e88b76-8e9b-5f3c-b11f-f1315787b810", 01:28:15.093 "is_configured": true, 01:28:15.093 "data_offset": 0, 01:28:15.093 "data_size": 65536 01:28:15.093 } 01:28:15.093 ] 01:28:15.093 }' 01:28:15.093 05:23:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // 
"none"' 01:28:15.093 05:23:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 01:28:15.093 05:23:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:28:15.093 05:23:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 01:28:15.093 05:23:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 01:28:15.093 05:23:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:15.093 05:23:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 01:28:15.093 [2024-12-09 05:23:06.592834] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 01:28:15.093 [2024-12-09 05:23:06.604784] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 01:28:15.093 [2024-12-09 05:23:06.604888] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:28:15.093 [2024-12-09 05:23:06.604911] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 01:28:15.093 [2024-12-09 05:23:06.604924] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 01:28:15.093 05:23:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:15.093 05:23:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 01:28:15.093 05:23:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:28:15.093 05:23:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:28:15.093 05:23:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:28:15.093 05:23:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:28:15.093 05:23:06 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 01:28:15.093 05:23:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:28:15.093 05:23:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:28:15.093 05:23:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:28:15.093 05:23:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:28:15.093 05:23:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:28:15.093 05:23:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:28:15.093 05:23:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:15.093 05:23:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 01:28:15.093 05:23:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:15.093 05:23:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:28:15.093 "name": "raid_bdev1", 01:28:15.093 "uuid": "767f67dd-2768-48f0-95b1-f2d9a0c5ae18", 01:28:15.093 "strip_size_kb": 0, 01:28:15.093 "state": "online", 01:28:15.093 "raid_level": "raid1", 01:28:15.093 "superblock": false, 01:28:15.093 "num_base_bdevs": 4, 01:28:15.093 "num_base_bdevs_discovered": 3, 01:28:15.093 "num_base_bdevs_operational": 3, 01:28:15.093 "base_bdevs_list": [ 01:28:15.093 { 01:28:15.093 "name": null, 01:28:15.093 "uuid": "00000000-0000-0000-0000-000000000000", 01:28:15.093 "is_configured": false, 01:28:15.093 "data_offset": 0, 01:28:15.093 "data_size": 65536 01:28:15.093 }, 01:28:15.093 { 01:28:15.093 "name": "BaseBdev2", 01:28:15.093 "uuid": "49b0a9d7-e373-5c4a-8b8d-0037f0670965", 01:28:15.093 "is_configured": true, 01:28:15.093 "data_offset": 0, 01:28:15.093 "data_size": 65536 01:28:15.093 }, 01:28:15.093 { 01:28:15.093 "name": "BaseBdev3", 01:28:15.093 
"uuid": "d60399a2-b9f2-5afa-b62f-98711bdb4f46", 01:28:15.093 "is_configured": true, 01:28:15.093 "data_offset": 0, 01:28:15.093 "data_size": 65536 01:28:15.093 }, 01:28:15.093 { 01:28:15.093 "name": "BaseBdev4", 01:28:15.093 "uuid": "99e88b76-8e9b-5f3c-b11f-f1315787b810", 01:28:15.093 "is_configured": true, 01:28:15.093 "data_offset": 0, 01:28:15.093 "data_size": 65536 01:28:15.093 } 01:28:15.093 ] 01:28:15.093 }' 01:28:15.093 05:23:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:28:15.093 05:23:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 01:28:15.686 05:23:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 01:28:15.686 05:23:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:28:15.686 05:23:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 01:28:15.686 05:23:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 01:28:15.686 05:23:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:28:15.686 05:23:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:28:15.686 05:23:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:28:15.686 05:23:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:15.686 05:23:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 01:28:15.686 05:23:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:15.686 05:23:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:28:15.686 "name": "raid_bdev1", 01:28:15.686 "uuid": "767f67dd-2768-48f0-95b1-f2d9a0c5ae18", 01:28:15.686 "strip_size_kb": 0, 01:28:15.686 "state": "online", 01:28:15.686 "raid_level": "raid1", 01:28:15.686 
"superblock": false, 01:28:15.686 "num_base_bdevs": 4, 01:28:15.686 "num_base_bdevs_discovered": 3, 01:28:15.686 "num_base_bdevs_operational": 3, 01:28:15.686 "base_bdevs_list": [ 01:28:15.686 { 01:28:15.686 "name": null, 01:28:15.686 "uuid": "00000000-0000-0000-0000-000000000000", 01:28:15.686 "is_configured": false, 01:28:15.686 "data_offset": 0, 01:28:15.686 "data_size": 65536 01:28:15.686 }, 01:28:15.686 { 01:28:15.686 "name": "BaseBdev2", 01:28:15.686 "uuid": "49b0a9d7-e373-5c4a-8b8d-0037f0670965", 01:28:15.686 "is_configured": true, 01:28:15.686 "data_offset": 0, 01:28:15.686 "data_size": 65536 01:28:15.686 }, 01:28:15.686 { 01:28:15.686 "name": "BaseBdev3", 01:28:15.686 "uuid": "d60399a2-b9f2-5afa-b62f-98711bdb4f46", 01:28:15.686 "is_configured": true, 01:28:15.686 "data_offset": 0, 01:28:15.686 "data_size": 65536 01:28:15.686 }, 01:28:15.686 { 01:28:15.686 "name": "BaseBdev4", 01:28:15.686 "uuid": "99e88b76-8e9b-5f3c-b11f-f1315787b810", 01:28:15.686 "is_configured": true, 01:28:15.686 "data_offset": 0, 01:28:15.686 "data_size": 65536 01:28:15.686 } 01:28:15.686 ] 01:28:15.686 }' 01:28:15.686 05:23:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:28:15.686 05:23:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 01:28:15.686 05:23:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:28:15.686 05:23:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 01:28:15.944 05:23:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 01:28:15.944 05:23:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:15.944 05:23:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 01:28:15.944 [2024-12-09 05:23:07.307276] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 01:28:15.944 
[2024-12-09 05:23:07.320047] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09e40 01:28:15.944 05:23:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:15.944 05:23:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 01:28:15.944 [2024-12-09 05:23:07.322567] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 01:28:16.880 05:23:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 01:28:16.880 05:23:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:28:16.880 05:23:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 01:28:16.880 05:23:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 01:28:16.880 05:23:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:28:16.880 05:23:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:28:16.880 05:23:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:28:16.880 05:23:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:16.880 05:23:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 01:28:16.880 05:23:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:16.880 05:23:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:28:16.880 "name": "raid_bdev1", 01:28:16.880 "uuid": "767f67dd-2768-48f0-95b1-f2d9a0c5ae18", 01:28:16.880 "strip_size_kb": 0, 01:28:16.880 "state": "online", 01:28:16.880 "raid_level": "raid1", 01:28:16.880 "superblock": false, 01:28:16.880 "num_base_bdevs": 4, 01:28:16.880 "num_base_bdevs_discovered": 4, 01:28:16.880 "num_base_bdevs_operational": 4, 01:28:16.880 
"process": { 01:28:16.880 "type": "rebuild", 01:28:16.880 "target": "spare", 01:28:16.880 "progress": { 01:28:16.880 "blocks": 20480, 01:28:16.880 "percent": 31 01:28:16.880 } 01:28:16.880 }, 01:28:16.880 "base_bdevs_list": [ 01:28:16.880 { 01:28:16.880 "name": "spare", 01:28:16.880 "uuid": "a37431bb-ba66-595d-8924-1c587a6a737f", 01:28:16.880 "is_configured": true, 01:28:16.880 "data_offset": 0, 01:28:16.880 "data_size": 65536 01:28:16.880 }, 01:28:16.880 { 01:28:16.880 "name": "BaseBdev2", 01:28:16.880 "uuid": "49b0a9d7-e373-5c4a-8b8d-0037f0670965", 01:28:16.880 "is_configured": true, 01:28:16.880 "data_offset": 0, 01:28:16.880 "data_size": 65536 01:28:16.880 }, 01:28:16.880 { 01:28:16.880 "name": "BaseBdev3", 01:28:16.880 "uuid": "d60399a2-b9f2-5afa-b62f-98711bdb4f46", 01:28:16.880 "is_configured": true, 01:28:16.880 "data_offset": 0, 01:28:16.880 "data_size": 65536 01:28:16.880 }, 01:28:16.880 { 01:28:16.880 "name": "BaseBdev4", 01:28:16.880 "uuid": "99e88b76-8e9b-5f3c-b11f-f1315787b810", 01:28:16.880 "is_configured": true, 01:28:16.880 "data_offset": 0, 01:28:16.880 "data_size": 65536 01:28:16.880 } 01:28:16.880 ] 01:28:16.880 }' 01:28:16.880 05:23:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:28:16.880 05:23:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 01:28:16.880 05:23:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:28:16.880 05:23:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 01:28:16.880 05:23:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 01:28:16.880 05:23:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 01:28:16.880 05:23:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 01:28:16.880 05:23:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 4 
-gt 2 ']' 01:28:16.880 05:23:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 01:28:16.880 05:23:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:16.880 05:23:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 01:28:17.139 [2024-12-09 05:23:08.495955] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 01:28:17.139 [2024-12-09 05:23:08.531932] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d09e40 01:28:17.139 05:23:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:17.139 05:23:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 01:28:17.139 05:23:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 01:28:17.139 05:23:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 01:28:17.139 05:23:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:28:17.139 05:23:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 01:28:17.139 05:23:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 01:28:17.139 05:23:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:28:17.139 05:23:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:28:17.139 05:23:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:17.139 05:23:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:28:17.139 05:23:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 01:28:17.139 05:23:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:17.139 05:23:08 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:28:17.139 "name": "raid_bdev1", 01:28:17.139 "uuid": "767f67dd-2768-48f0-95b1-f2d9a0c5ae18", 01:28:17.139 "strip_size_kb": 0, 01:28:17.139 "state": "online", 01:28:17.139 "raid_level": "raid1", 01:28:17.139 "superblock": false, 01:28:17.139 "num_base_bdevs": 4, 01:28:17.139 "num_base_bdevs_discovered": 3, 01:28:17.139 "num_base_bdevs_operational": 3, 01:28:17.139 "process": { 01:28:17.139 "type": "rebuild", 01:28:17.139 "target": "spare", 01:28:17.139 "progress": { 01:28:17.139 "blocks": 24576, 01:28:17.139 "percent": 37 01:28:17.139 } 01:28:17.139 }, 01:28:17.139 "base_bdevs_list": [ 01:28:17.139 { 01:28:17.139 "name": "spare", 01:28:17.139 "uuid": "a37431bb-ba66-595d-8924-1c587a6a737f", 01:28:17.139 "is_configured": true, 01:28:17.139 "data_offset": 0, 01:28:17.139 "data_size": 65536 01:28:17.139 }, 01:28:17.139 { 01:28:17.139 "name": null, 01:28:17.139 "uuid": "00000000-0000-0000-0000-000000000000", 01:28:17.139 "is_configured": false, 01:28:17.139 "data_offset": 0, 01:28:17.139 "data_size": 65536 01:28:17.139 }, 01:28:17.139 { 01:28:17.139 "name": "BaseBdev3", 01:28:17.139 "uuid": "d60399a2-b9f2-5afa-b62f-98711bdb4f46", 01:28:17.139 "is_configured": true, 01:28:17.139 "data_offset": 0, 01:28:17.139 "data_size": 65536 01:28:17.139 }, 01:28:17.139 { 01:28:17.139 "name": "BaseBdev4", 01:28:17.139 "uuid": "99e88b76-8e9b-5f3c-b11f-f1315787b810", 01:28:17.139 "is_configured": true, 01:28:17.139 "data_offset": 0, 01:28:17.139 "data_size": 65536 01:28:17.139 } 01:28:17.139 ] 01:28:17.139 }' 01:28:17.139 05:23:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:28:17.139 05:23:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 01:28:17.139 05:23:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:28:17.139 05:23:08 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 01:28:17.139 05:23:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=490 01:28:17.139 05:23:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 01:28:17.139 05:23:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 01:28:17.139 05:23:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:28:17.139 05:23:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 01:28:17.139 05:23:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 01:28:17.139 05:23:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:28:17.139 05:23:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:28:17.139 05:23:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:17.139 05:23:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 01:28:17.139 05:23:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:28:17.139 05:23:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:17.397 05:23:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:28:17.397 "name": "raid_bdev1", 01:28:17.397 "uuid": "767f67dd-2768-48f0-95b1-f2d9a0c5ae18", 01:28:17.397 "strip_size_kb": 0, 01:28:17.397 "state": "online", 01:28:17.397 "raid_level": "raid1", 01:28:17.397 "superblock": false, 01:28:17.397 "num_base_bdevs": 4, 01:28:17.397 "num_base_bdevs_discovered": 3, 01:28:17.397 "num_base_bdevs_operational": 3, 01:28:17.397 "process": { 01:28:17.397 "type": "rebuild", 01:28:17.397 "target": "spare", 01:28:17.397 "progress": { 01:28:17.397 "blocks": 26624, 01:28:17.397 "percent": 40 01:28:17.397 } 01:28:17.397 }, 
01:28:17.397 "base_bdevs_list": [ 01:28:17.397 { 01:28:17.397 "name": "spare", 01:28:17.397 "uuid": "a37431bb-ba66-595d-8924-1c587a6a737f", 01:28:17.397 "is_configured": true, 01:28:17.397 "data_offset": 0, 01:28:17.397 "data_size": 65536 01:28:17.397 }, 01:28:17.397 { 01:28:17.397 "name": null, 01:28:17.397 "uuid": "00000000-0000-0000-0000-000000000000", 01:28:17.397 "is_configured": false, 01:28:17.397 "data_offset": 0, 01:28:17.397 "data_size": 65536 01:28:17.397 }, 01:28:17.397 { 01:28:17.397 "name": "BaseBdev3", 01:28:17.397 "uuid": "d60399a2-b9f2-5afa-b62f-98711bdb4f46", 01:28:17.397 "is_configured": true, 01:28:17.397 "data_offset": 0, 01:28:17.397 "data_size": 65536 01:28:17.397 }, 01:28:17.397 { 01:28:17.397 "name": "BaseBdev4", 01:28:17.397 "uuid": "99e88b76-8e9b-5f3c-b11f-f1315787b810", 01:28:17.397 "is_configured": true, 01:28:17.397 "data_offset": 0, 01:28:17.397 "data_size": 65536 01:28:17.397 } 01:28:17.397 ] 01:28:17.397 }' 01:28:17.397 05:23:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:28:17.397 05:23:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 01:28:17.397 05:23:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:28:17.397 05:23:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 01:28:17.397 05:23:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 01:28:18.330 05:23:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 01:28:18.330 05:23:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 01:28:18.330 05:23:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:28:18.330 05:23:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 01:28:18.330 05:23:09 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@171 -- # local target=spare 01:28:18.330 05:23:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:28:18.330 05:23:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:28:18.330 05:23:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:18.330 05:23:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 01:28:18.330 05:23:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:28:18.330 05:23:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:18.330 05:23:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:28:18.330 "name": "raid_bdev1", 01:28:18.330 "uuid": "767f67dd-2768-48f0-95b1-f2d9a0c5ae18", 01:28:18.330 "strip_size_kb": 0, 01:28:18.330 "state": "online", 01:28:18.330 "raid_level": "raid1", 01:28:18.330 "superblock": false, 01:28:18.330 "num_base_bdevs": 4, 01:28:18.330 "num_base_bdevs_discovered": 3, 01:28:18.330 "num_base_bdevs_operational": 3, 01:28:18.330 "process": { 01:28:18.330 "type": "rebuild", 01:28:18.330 "target": "spare", 01:28:18.330 "progress": { 01:28:18.330 "blocks": 51200, 01:28:18.330 "percent": 78 01:28:18.330 } 01:28:18.330 }, 01:28:18.330 "base_bdevs_list": [ 01:28:18.330 { 01:28:18.330 "name": "spare", 01:28:18.330 "uuid": "a37431bb-ba66-595d-8924-1c587a6a737f", 01:28:18.330 "is_configured": true, 01:28:18.330 "data_offset": 0, 01:28:18.330 "data_size": 65536 01:28:18.330 }, 01:28:18.330 { 01:28:18.330 "name": null, 01:28:18.330 "uuid": "00000000-0000-0000-0000-000000000000", 01:28:18.330 "is_configured": false, 01:28:18.330 "data_offset": 0, 01:28:18.330 "data_size": 65536 01:28:18.330 }, 01:28:18.330 { 01:28:18.330 "name": "BaseBdev3", 01:28:18.330 "uuid": "d60399a2-b9f2-5afa-b62f-98711bdb4f46", 01:28:18.331 "is_configured": true, 01:28:18.331 "data_offset": 0, 
01:28:18.331 "data_size": 65536 01:28:18.331 }, 01:28:18.331 { 01:28:18.331 "name": "BaseBdev4", 01:28:18.331 "uuid": "99e88b76-8e9b-5f3c-b11f-f1315787b810", 01:28:18.331 "is_configured": true, 01:28:18.331 "data_offset": 0, 01:28:18.331 "data_size": 65536 01:28:18.331 } 01:28:18.331 ] 01:28:18.331 }' 01:28:18.331 05:23:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:28:18.589 05:23:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 01:28:18.589 05:23:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:28:18.589 05:23:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 01:28:18.589 05:23:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 01:28:19.153 [2024-12-09 05:23:10.547532] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 01:28:19.153 [2024-12-09 05:23:10.547607] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 01:28:19.153 [2024-12-09 05:23:10.547701] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:28:19.718 05:23:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 01:28:19.718 05:23:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 01:28:19.718 05:23:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:28:19.718 05:23:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 01:28:19.718 05:23:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 01:28:19.718 05:23:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:28:19.718 05:23:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:28:19.718 
05:23:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:19.718 05:23:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 01:28:19.718 05:23:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:28:19.718 05:23:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:19.718 05:23:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:28:19.718 "name": "raid_bdev1", 01:28:19.718 "uuid": "767f67dd-2768-48f0-95b1-f2d9a0c5ae18", 01:28:19.719 "strip_size_kb": 0, 01:28:19.719 "state": "online", 01:28:19.719 "raid_level": "raid1", 01:28:19.719 "superblock": false, 01:28:19.719 "num_base_bdevs": 4, 01:28:19.719 "num_base_bdevs_discovered": 3, 01:28:19.719 "num_base_bdevs_operational": 3, 01:28:19.719 "base_bdevs_list": [ 01:28:19.719 { 01:28:19.719 "name": "spare", 01:28:19.719 "uuid": "a37431bb-ba66-595d-8924-1c587a6a737f", 01:28:19.719 "is_configured": true, 01:28:19.719 "data_offset": 0, 01:28:19.719 "data_size": 65536 01:28:19.719 }, 01:28:19.719 { 01:28:19.719 "name": null, 01:28:19.719 "uuid": "00000000-0000-0000-0000-000000000000", 01:28:19.719 "is_configured": false, 01:28:19.719 "data_offset": 0, 01:28:19.719 "data_size": 65536 01:28:19.719 }, 01:28:19.719 { 01:28:19.719 "name": "BaseBdev3", 01:28:19.719 "uuid": "d60399a2-b9f2-5afa-b62f-98711bdb4f46", 01:28:19.719 "is_configured": true, 01:28:19.719 "data_offset": 0, 01:28:19.719 "data_size": 65536 01:28:19.719 }, 01:28:19.719 { 01:28:19.719 "name": "BaseBdev4", 01:28:19.719 "uuid": "99e88b76-8e9b-5f3c-b11f-f1315787b810", 01:28:19.719 "is_configured": true, 01:28:19.719 "data_offset": 0, 01:28:19.719 "data_size": 65536 01:28:19.719 } 01:28:19.719 ] 01:28:19.719 }' 01:28:19.719 05:23:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:28:19.719 05:23:11 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 01:28:19.719 05:23:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:28:19.719 05:23:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 01:28:19.719 05:23:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 01:28:19.719 05:23:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 01:28:19.719 05:23:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:28:19.719 05:23:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 01:28:19.719 05:23:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 01:28:19.719 05:23:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:28:19.719 05:23:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:28:19.719 05:23:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:19.719 05:23:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 01:28:19.719 05:23:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:28:19.719 05:23:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:19.719 05:23:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:28:19.719 "name": "raid_bdev1", 01:28:19.719 "uuid": "767f67dd-2768-48f0-95b1-f2d9a0c5ae18", 01:28:19.719 "strip_size_kb": 0, 01:28:19.719 "state": "online", 01:28:19.719 "raid_level": "raid1", 01:28:19.719 "superblock": false, 01:28:19.719 "num_base_bdevs": 4, 01:28:19.719 "num_base_bdevs_discovered": 3, 01:28:19.719 "num_base_bdevs_operational": 3, 01:28:19.719 "base_bdevs_list": [ 01:28:19.719 { 01:28:19.719 "name": "spare", 01:28:19.719 "uuid": 
"a37431bb-ba66-595d-8924-1c587a6a737f", 01:28:19.719 "is_configured": true, 01:28:19.719 "data_offset": 0, 01:28:19.719 "data_size": 65536 01:28:19.719 }, 01:28:19.719 { 01:28:19.719 "name": null, 01:28:19.719 "uuid": "00000000-0000-0000-0000-000000000000", 01:28:19.719 "is_configured": false, 01:28:19.719 "data_offset": 0, 01:28:19.719 "data_size": 65536 01:28:19.719 }, 01:28:19.719 { 01:28:19.719 "name": "BaseBdev3", 01:28:19.719 "uuid": "d60399a2-b9f2-5afa-b62f-98711bdb4f46", 01:28:19.719 "is_configured": true, 01:28:19.719 "data_offset": 0, 01:28:19.719 "data_size": 65536 01:28:19.719 }, 01:28:19.719 { 01:28:19.719 "name": "BaseBdev4", 01:28:19.719 "uuid": "99e88b76-8e9b-5f3c-b11f-f1315787b810", 01:28:19.719 "is_configured": true, 01:28:19.719 "data_offset": 0, 01:28:19.719 "data_size": 65536 01:28:19.719 } 01:28:19.719 ] 01:28:19.719 }' 01:28:19.719 05:23:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:28:19.719 05:23:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 01:28:19.719 05:23:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:28:20.085 05:23:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 01:28:20.085 05:23:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 01:28:20.086 05:23:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:28:20.086 05:23:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:28:20.086 05:23:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:28:20.086 05:23:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:28:20.086 05:23:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 01:28:20.086 05:23:11 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:28:20.086 05:23:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:28:20.086 05:23:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:28:20.086 05:23:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:28:20.086 05:23:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:28:20.086 05:23:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:20.086 05:23:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:28:20.086 05:23:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 01:28:20.086 05:23:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:20.086 05:23:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:28:20.086 "name": "raid_bdev1", 01:28:20.086 "uuid": "767f67dd-2768-48f0-95b1-f2d9a0c5ae18", 01:28:20.086 "strip_size_kb": 0, 01:28:20.086 "state": "online", 01:28:20.086 "raid_level": "raid1", 01:28:20.086 "superblock": false, 01:28:20.086 "num_base_bdevs": 4, 01:28:20.086 "num_base_bdevs_discovered": 3, 01:28:20.086 "num_base_bdevs_operational": 3, 01:28:20.086 "base_bdevs_list": [ 01:28:20.086 { 01:28:20.086 "name": "spare", 01:28:20.086 "uuid": "a37431bb-ba66-595d-8924-1c587a6a737f", 01:28:20.086 "is_configured": true, 01:28:20.086 "data_offset": 0, 01:28:20.086 "data_size": 65536 01:28:20.086 }, 01:28:20.086 { 01:28:20.086 "name": null, 01:28:20.086 "uuid": "00000000-0000-0000-0000-000000000000", 01:28:20.086 "is_configured": false, 01:28:20.086 "data_offset": 0, 01:28:20.086 "data_size": 65536 01:28:20.086 }, 01:28:20.086 { 01:28:20.086 "name": "BaseBdev3", 01:28:20.086 "uuid": "d60399a2-b9f2-5afa-b62f-98711bdb4f46", 01:28:20.086 "is_configured": true, 
01:28:20.086 "data_offset": 0, 01:28:20.086 "data_size": 65536 01:28:20.086 }, 01:28:20.086 { 01:28:20.086 "name": "BaseBdev4", 01:28:20.086 "uuid": "99e88b76-8e9b-5f3c-b11f-f1315787b810", 01:28:20.086 "is_configured": true, 01:28:20.086 "data_offset": 0, 01:28:20.086 "data_size": 65536 01:28:20.086 } 01:28:20.086 ] 01:28:20.086 }' 01:28:20.086 05:23:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:28:20.086 05:23:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 01:28:20.354 05:23:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 01:28:20.354 05:23:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:20.354 05:23:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 01:28:20.354 [2024-12-09 05:23:11.901707] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 01:28:20.354 [2024-12-09 05:23:11.901744] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 01:28:20.354 [2024-12-09 05:23:11.901901] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 01:28:20.354 [2024-12-09 05:23:11.902007] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 01:28:20.354 [2024-12-09 05:23:11.902022] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 01:28:20.354 05:23:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:20.354 05:23:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 01:28:20.354 05:23:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 01:28:20.354 05:23:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:20.354 05:23:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 
-- # set +x 01:28:20.354 05:23:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:20.354 05:23:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 01:28:20.355 05:23:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 01:28:20.355 05:23:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 01:28:20.355 05:23:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 01:28:20.355 05:23:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 01:28:20.355 05:23:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 01:28:20.355 05:23:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 01:28:20.355 05:23:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 01:28:20.355 05:23:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 01:28:20.355 05:23:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 01:28:20.355 05:23:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 01:28:20.355 05:23:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 01:28:20.355 05:23:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 01:28:20.919 /dev/nbd0 01:28:20.919 05:23:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 01:28:20.919 05:23:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 01:28:20.919 05:23:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 01:28:20.919 05:23:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 01:28:20.919 05:23:12 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 01:28:20.919 05:23:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 01:28:20.919 05:23:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 01:28:20.919 05:23:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 01:28:20.919 05:23:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 01:28:20.919 05:23:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 01:28:20.919 05:23:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 01:28:20.919 1+0 records in 01:28:20.919 1+0 records out 01:28:20.919 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000283755 s, 14.4 MB/s 01:28:20.919 05:23:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:28:20.919 05:23:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 01:28:20.919 05:23:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:28:20.919 05:23:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 01:28:20.919 05:23:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 01:28:20.919 05:23:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 01:28:20.919 05:23:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 01:28:20.919 05:23:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 01:28:21.177 /dev/nbd1 01:28:21.177 05:23:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 01:28:21.177 
05:23:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 01:28:21.177 05:23:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 01:28:21.177 05:23:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 01:28:21.177 05:23:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 01:28:21.177 05:23:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 01:28:21.177 05:23:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 01:28:21.177 05:23:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 01:28:21.177 05:23:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 01:28:21.177 05:23:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 01:28:21.177 05:23:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 01:28:21.177 1+0 records in 01:28:21.177 1+0 records out 01:28:21.177 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000365246 s, 11.2 MB/s 01:28:21.177 05:23:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:28:21.177 05:23:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 01:28:21.177 05:23:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:28:21.177 05:23:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 01:28:21.177 05:23:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 01:28:21.177 05:23:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 01:28:21.177 05:23:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 
01:28:21.177 05:23:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 01:28:21.435 05:23:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 01:28:21.435 05:23:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 01:28:21.435 05:23:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 01:28:21.435 05:23:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 01:28:21.435 05:23:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 01:28:21.435 05:23:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 01:28:21.435 05:23:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 01:28:21.693 05:23:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 01:28:21.693 05:23:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 01:28:21.693 05:23:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 01:28:21.693 05:23:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:28:21.693 05:23:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:28:21.693 05:23:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 01:28:21.693 05:23:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 01:28:21.693 05:23:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 01:28:21.693 05:23:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 01:28:21.693 05:23:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 01:28:21.952 
05:23:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 01:28:21.952 05:23:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 01:28:21.952 05:23:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 01:28:21.952 05:23:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:28:21.952 05:23:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:28:21.952 05:23:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 01:28:21.952 05:23:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 01:28:21.952 05:23:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 01:28:21.952 05:23:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 01:28:21.952 05:23:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 77709 01:28:21.952 05:23:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 77709 ']' 01:28:21.952 05:23:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 77709 01:28:21.952 05:23:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # uname 01:28:21.952 05:23:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:28:21.952 05:23:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77709 01:28:21.952 killing process with pid 77709 01:28:21.952 Received shutdown signal, test time was about 60.000000 seconds 01:28:21.952 01:28:21.952 Latency(us) 01:28:21.952 [2024-12-09T05:23:13.569Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:28:21.952 [2024-12-09T05:23:13.569Z] =================================================================================================================== 01:28:21.952 [2024-12-09T05:23:13.569Z] Total : 0.00 0.00 0.00 0.00 0.00 
18446744073709551616.00 0.00 01:28:21.952 05:23:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:28:21.952 05:23:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:28:21.952 05:23:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77709' 01:28:21.952 05:23:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 77709 01:28:21.952 [2024-12-09 05:23:13.492433] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 01:28:21.952 05:23:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 77709 01:28:22.519 [2024-12-09 05:23:13.870768] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 01:28:23.453 05:23:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 01:28:23.453 01:28:23.453 real 0m21.368s 01:28:23.453 user 0m24.324s 01:28:23.453 sys 0m3.712s 01:28:23.453 05:23:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 01:28:23.453 ************************************ 01:28:23.453 END TEST raid_rebuild_test 01:28:23.453 ************************************ 01:28:23.453 05:23:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 01:28:23.453 05:23:14 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true 01:28:23.453 05:23:14 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 01:28:23.453 05:23:14 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 01:28:23.453 05:23:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 01:28:23.453 ************************************ 01:28:23.453 START TEST raid_rebuild_test_sb 01:28:23.453 ************************************ 01:28:23.454 05:23:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true false true 01:28:23.454 05:23:14 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 01:28:23.454 05:23:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 01:28:23.454 05:23:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 01:28:23.454 05:23:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 01:28:23.454 05:23:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 01:28:23.454 05:23:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 01:28:23.454 05:23:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 01:28:23.454 05:23:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 01:28:23.454 05:23:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 01:28:23.454 05:23:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 01:28:23.454 05:23:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 01:28:23.454 05:23:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 01:28:23.454 05:23:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 01:28:23.454 05:23:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 01:28:23.454 05:23:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 01:28:23.454 05:23:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 01:28:23.454 05:23:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 01:28:23.454 05:23:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 01:28:23.454 05:23:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 01:28:23.454 05:23:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 01:28:23.454 05:23:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 01:28:23.454 05:23:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 01:28:23.454 05:23:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 01:28:23.454 05:23:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 01:28:23.454 05:23:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 01:28:23.454 05:23:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 01:28:23.454 05:23:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 01:28:23.454 05:23:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 01:28:23.454 05:23:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 01:28:23.454 05:23:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 01:28:23.454 05:23:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=78194 01:28:23.454 05:23:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 78194 01:28:23.454 05:23:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 01:28:23.454 05:23:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 78194 ']' 01:28:23.454 05:23:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:28:23.454 05:23:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 01:28:23.454 05:23:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 01:28:23.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:28:23.454 05:23:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 01:28:23.454 05:23:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:28:23.711 [2024-12-09 05:23:15.099933] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:28:23.711 [2024-12-09 05:23:15.100431] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78194 ] 01:28:23.711 I/O size of 3145728 is greater than zero copy threshold (65536). 01:28:23.711 Zero copy mechanism will not be used. 01:28:23.711 [2024-12-09 05:23:15.301494] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:28:23.969 [2024-12-09 05:23:15.430500] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:28:24.227 [2024-12-09 05:23:15.625319] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 01:28:24.227 [2024-12-09 05:23:15.625588] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 01:28:24.485 05:23:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:28:24.485 05:23:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 01:28:24.485 05:23:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 01:28:24.485 05:23:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 01:28:24.485 05:23:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:24.485 05:23:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:28:24.485 
BaseBdev1_malloc 01:28:24.485 05:23:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:24.485 05:23:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 01:28:24.485 05:23:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:24.485 05:23:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:28:24.485 [2024-12-09 05:23:16.078176] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 01:28:24.485 [2024-12-09 05:23:16.078473] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:28:24.485 [2024-12-09 05:23:16.078547] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 01:28:24.485 [2024-12-09 05:23:16.078838] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:28:24.485 [2024-12-09 05:23:16.081982] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:28:24.485 [2024-12-09 05:23:16.082193] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 01:28:24.485 BaseBdev1 01:28:24.485 05:23:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:24.485 05:23:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 01:28:24.485 05:23:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 01:28:24.485 05:23:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:24.485 05:23:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:28:24.745 BaseBdev2_malloc 01:28:24.745 05:23:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:24.745 05:23:16 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 01:28:24.745 05:23:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:24.745 05:23:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:28:24.745 [2024-12-09 05:23:16.131721] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 01:28:24.745 [2024-12-09 05:23:16.131818] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:28:24.745 [2024-12-09 05:23:16.131850] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 01:28:24.745 [2024-12-09 05:23:16.131867] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:28:24.745 [2024-12-09 05:23:16.134612] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:28:24.745 [2024-12-09 05:23:16.134902] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 01:28:24.745 BaseBdev2 01:28:24.745 05:23:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:24.745 05:23:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 01:28:24.745 05:23:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 01:28:24.745 05:23:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:24.745 05:23:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:28:24.745 BaseBdev3_malloc 01:28:24.745 05:23:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:24.745 05:23:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 01:28:24.745 05:23:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
01:28:24.745 05:23:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:28:24.745 [2024-12-09 05:23:16.194005] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 01:28:24.745 [2024-12-09 05:23:16.194082] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:28:24.745 [2024-12-09 05:23:16.194112] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 01:28:24.745 [2024-12-09 05:23:16.194128] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:28:24.745 [2024-12-09 05:23:16.196987] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:28:24.745 [2024-12-09 05:23:16.197032] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 01:28:24.745 BaseBdev3 01:28:24.745 05:23:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:24.745 05:23:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 01:28:24.745 05:23:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 01:28:24.745 05:23:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:24.745 05:23:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:28:24.745 BaseBdev4_malloc 01:28:24.745 05:23:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:24.745 05:23:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 01:28:24.745 05:23:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:24.745 05:23:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:28:24.745 [2024-12-09 05:23:16.247837] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: 
Match on BaseBdev4_malloc 01:28:24.745 [2024-12-09 05:23:16.247915] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:28:24.745 [2024-12-09 05:23:16.247946] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 01:28:24.745 [2024-12-09 05:23:16.247965] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:28:24.745 [2024-12-09 05:23:16.250843] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:28:24.745 [2024-12-09 05:23:16.251078] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 01:28:24.745 BaseBdev4 01:28:24.745 05:23:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:24.745 05:23:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 01:28:24.745 05:23:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:24.745 05:23:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:28:24.745 spare_malloc 01:28:24.745 05:23:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:24.745 05:23:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 01:28:24.745 05:23:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:24.745 05:23:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:28:24.745 spare_delay 01:28:24.745 05:23:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:24.745 05:23:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 01:28:24.745 05:23:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:24.745 05:23:16 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:28:24.745 [2024-12-09 05:23:16.311010] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 01:28:24.745 [2024-12-09 05:23:16.311099] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:28:24.745 [2024-12-09 05:23:16.311124] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 01:28:24.745 [2024-12-09 05:23:16.311139] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:28:24.745 [2024-12-09 05:23:16.314174] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:28:24.745 [2024-12-09 05:23:16.314405] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 01:28:24.745 spare 01:28:24.745 05:23:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:24.745 05:23:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 01:28:24.745 05:23:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:24.745 05:23:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:28:24.745 [2024-12-09 05:23:16.323168] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 01:28:24.745 [2024-12-09 05:23:16.325750] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 01:28:24.745 [2024-12-09 05:23:16.326010] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 01:28:24.745 [2024-12-09 05:23:16.326145] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 01:28:24.745 [2024-12-09 05:23:16.326498] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 01:28:24.745 [2024-12-09 05:23:16.326541] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 01:28:24.745 [2024-12-09 05:23:16.326888] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 01:28:24.745 [2024-12-09 05:23:16.327114] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 01:28:24.745 [2024-12-09 05:23:16.327129] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 01:28:24.745 [2024-12-09 05:23:16.327360] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:28:24.745 05:23:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:24.745 05:23:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 01:28:24.745 05:23:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:28:24.745 05:23:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:28:24.745 05:23:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:28:24.745 05:23:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:28:24.745 05:23:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 01:28:24.745 05:23:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:28:24.745 05:23:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:28:24.745 05:23:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:28:24.745 05:23:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 01:28:24.745 05:23:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:28:24.745 05:23:16 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 01:28:24.745 05:23:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:28:24.745 05:23:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:28:24.745 05:23:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:25.005 05:23:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:28:25.005 "name": "raid_bdev1", 01:28:25.005 "uuid": "9aa0d077-09aa-4037-bf11-ce1a48527e8d", 01:28:25.005 "strip_size_kb": 0, 01:28:25.005 "state": "online", 01:28:25.005 "raid_level": "raid1", 01:28:25.005 "superblock": true, 01:28:25.005 "num_base_bdevs": 4, 01:28:25.005 "num_base_bdevs_discovered": 4, 01:28:25.005 "num_base_bdevs_operational": 4, 01:28:25.005 "base_bdevs_list": [ 01:28:25.005 { 01:28:25.005 "name": "BaseBdev1", 01:28:25.005 "uuid": "7512656b-d949-5b24-b8ea-9c1f76bd75a7", 01:28:25.005 "is_configured": true, 01:28:25.005 "data_offset": 2048, 01:28:25.005 "data_size": 63488 01:28:25.005 }, 01:28:25.005 { 01:28:25.005 "name": "BaseBdev2", 01:28:25.005 "uuid": "30794da6-74f0-5c1e-bf9f-f63639ac69e3", 01:28:25.005 "is_configured": true, 01:28:25.005 "data_offset": 2048, 01:28:25.005 "data_size": 63488 01:28:25.005 }, 01:28:25.005 { 01:28:25.005 "name": "BaseBdev3", 01:28:25.005 "uuid": "d5772271-daaf-55d1-b5a9-e3ae1e029592", 01:28:25.005 "is_configured": true, 01:28:25.005 "data_offset": 2048, 01:28:25.005 "data_size": 63488 01:28:25.005 }, 01:28:25.005 { 01:28:25.005 "name": "BaseBdev4", 01:28:25.005 "uuid": "05f9e8c8-2a06-57d6-a714-a9e4baa45555", 01:28:25.005 "is_configured": true, 01:28:25.005 "data_offset": 2048, 01:28:25.005 "data_size": 63488 01:28:25.005 } 01:28:25.005 ] 01:28:25.005 }' 01:28:25.005 05:23:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:28:25.005 05:23:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # 
set +x 01:28:25.263 05:23:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 01:28:25.263 05:23:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:25.263 05:23:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:28:25.263 05:23:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 01:28:25.263 [2024-12-09 05:23:16.855960] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 01:28:25.522 05:23:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:25.522 05:23:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 01:28:25.522 05:23:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 01:28:25.522 05:23:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:25.522 05:23:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 01:28:25.522 05:23:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:28:25.522 05:23:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:25.522 05:23:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 01:28:25.522 05:23:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 01:28:25.522 05:23:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 01:28:25.522 05:23:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 01:28:25.522 05:23:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 01:28:25.522 05:23:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 01:28:25.522 
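The trace above reads back `raid_bdev_size=63488` via `jq -r '.[].num_blocks'` and `data_offset=2048` from the first base bdev. Those figures follow from earlier lines in this log: each member is a 32 MiB malloc bdev with 512-byte blocks (`bdev_malloc_create 32 512`), and the `-s` superblock flag reserves 2048 blocks per member. A minimal shell check of that arithmetic, using only numbers taken from this log:

```shell
# Figures from this log: 32 MiB malloc base bdevs, 512-byte blocks,
# and a 2048-block superblock reservation (data_offset) per member.
malloc_mib=32
blocklen=512
data_offset=2048

# Blocks per base bdev: 32 MiB / 512 B = 65536.
base_blocks=$((malloc_mib * 1024 * 1024 / blocklen))
echo "$base_blocks"                                 # 65536

# Usable blocks per member, and hence the raid1 size:
# 65536 - 2048 = 63488, matching raid_bdev_size above.
echo $((base_blocks - data_offset))                 # 63488

# In bytes: 63488 * 512 = 32505856 (31 MiB), the byte
# count the dd run reports later in this log.
echo $(((base_blocks - data_offset) * blocklen))    # 32505856
```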
05:23:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 01:28:25.522 05:23:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 01:28:25.522 05:23:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 01:28:25.522 05:23:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 01:28:25.522 05:23:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 01:28:25.522 05:23:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 01:28:25.522 05:23:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 01:28:25.522 05:23:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 01:28:25.780 [2024-12-09 05:23:17.243730] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 01:28:25.780 /dev/nbd0 01:28:25.780 05:23:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 01:28:25.780 05:23:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 01:28:25.780 05:23:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 01:28:25.780 05:23:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 01:28:25.780 05:23:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 01:28:25.780 05:23:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 01:28:25.780 05:23:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 01:28:25.780 05:23:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 01:28:25.780 05:23:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 01:28:25.780 05:23:17 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 01:28:25.780 05:23:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 01:28:25.780 1+0 records in 01:28:25.780 1+0 records out 01:28:25.780 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000305639 s, 13.4 MB/s 01:28:25.780 05:23:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:28:25.780 05:23:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 01:28:25.780 05:23:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:28:25.780 05:23:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 01:28:25.780 05:23:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 01:28:25.780 05:23:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 01:28:25.780 05:23:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 01:28:25.780 05:23:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 01:28:25.780 05:23:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 01:28:25.781 05:23:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 01:28:35.759 63488+0 records in 01:28:35.759 63488+0 records out 01:28:35.759 32505856 bytes (33 MB, 31 MiB) copied, 8.3943 s, 3.9 MB/s 01:28:35.759 05:23:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 01:28:35.759 05:23:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 01:28:35.759 05:23:25 bdev_raid.raid_rebuild_test_sb -- 
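The dd run above copies 63488 records of 512 bytes (32505856 bytes) in 8.3943 s and prints a 3.9 MB/s rate. That rate can be reproduced from the byte count and elapsed time, noting that dd reports SI megabytes (values copied from this log):

```shell
# Values printed by dd in this log.
bytes=32505856
seconds=8.3943

# dd uses SI units: bytes / seconds / 1e6 gives MB/s.
awk -v b="$bytes" -v s="$seconds" 'BEGIN { printf "%.1f MB/s\n", b / s / 1e6 }'
# -> 3.9 MB/s
```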
bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 01:28:35.759 05:23:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 01:28:35.759 05:23:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 01:28:35.759 05:23:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 01:28:35.759 05:23:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 01:28:35.759 [2024-12-09 05:23:26.002458] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:28:35.759 05:23:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 01:28:35.759 05:23:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 01:28:35.759 05:23:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 01:28:35.759 05:23:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:28:35.759 05:23:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:28:35.759 05:23:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 01:28:35.759 05:23:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 01:28:35.759 05:23:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 01:28:35.759 05:23:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 01:28:35.759 05:23:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:35.759 05:23:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:28:35.759 [2024-12-09 05:23:26.030612] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 01:28:35.759 05:23:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:35.759 05:23:26 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 01:28:35.759 05:23:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:28:35.759 05:23:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:28:35.759 05:23:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:28:35.759 05:23:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:28:35.759 05:23:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 01:28:35.759 05:23:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:28:35.759 05:23:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:28:35.759 05:23:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:28:35.759 05:23:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 01:28:35.759 05:23:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:28:35.759 05:23:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:35.759 05:23:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:28:35.759 05:23:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:28:35.759 05:23:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:35.759 05:23:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:28:35.759 "name": "raid_bdev1", 01:28:35.759 "uuid": "9aa0d077-09aa-4037-bf11-ce1a48527e8d", 01:28:35.759 "strip_size_kb": 0, 01:28:35.759 "state": "online", 01:28:35.759 "raid_level": "raid1", 01:28:35.759 "superblock": true, 01:28:35.759 "num_base_bdevs": 4, 
01:28:35.759 "num_base_bdevs_discovered": 3, 01:28:35.759 "num_base_bdevs_operational": 3, 01:28:35.759 "base_bdevs_list": [ 01:28:35.759 { 01:28:35.759 "name": null, 01:28:35.759 "uuid": "00000000-0000-0000-0000-000000000000", 01:28:35.759 "is_configured": false, 01:28:35.759 "data_offset": 0, 01:28:35.759 "data_size": 63488 01:28:35.759 }, 01:28:35.759 { 01:28:35.759 "name": "BaseBdev2", 01:28:35.759 "uuid": "30794da6-74f0-5c1e-bf9f-f63639ac69e3", 01:28:35.759 "is_configured": true, 01:28:35.759 "data_offset": 2048, 01:28:35.759 "data_size": 63488 01:28:35.759 }, 01:28:35.759 { 01:28:35.759 "name": "BaseBdev3", 01:28:35.759 "uuid": "d5772271-daaf-55d1-b5a9-e3ae1e029592", 01:28:35.759 "is_configured": true, 01:28:35.759 "data_offset": 2048, 01:28:35.759 "data_size": 63488 01:28:35.759 }, 01:28:35.759 { 01:28:35.759 "name": "BaseBdev4", 01:28:35.759 "uuid": "05f9e8c8-2a06-57d6-a714-a9e4baa45555", 01:28:35.759 "is_configured": true, 01:28:35.759 "data_offset": 2048, 01:28:35.759 "data_size": 63488 01:28:35.759 } 01:28:35.759 ] 01:28:35.759 }' 01:28:35.759 05:23:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:28:35.759 05:23:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:28:35.759 05:23:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 01:28:35.759 05:23:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:35.759 05:23:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:28:35.759 [2024-12-09 05:23:26.558793] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 01:28:35.759 [2024-12-09 05:23:26.574742] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3500 01:28:35.759 05:23:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:35.759 05:23:26 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@647 -- # sleep 1 01:28:35.759 [2024-12-09 05:23:26.577384] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 01:28:36.017 05:23:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 01:28:36.017 05:23:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:28:36.017 05:23:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 01:28:36.017 05:23:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 01:28:36.017 05:23:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:28:36.017 05:23:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:28:36.017 05:23:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:28:36.018 05:23:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:36.018 05:23:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:28:36.018 05:23:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:36.275 05:23:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:28:36.275 "name": "raid_bdev1", 01:28:36.275 "uuid": "9aa0d077-09aa-4037-bf11-ce1a48527e8d", 01:28:36.275 "strip_size_kb": 0, 01:28:36.275 "state": "online", 01:28:36.275 "raid_level": "raid1", 01:28:36.275 "superblock": true, 01:28:36.275 "num_base_bdevs": 4, 01:28:36.275 "num_base_bdevs_discovered": 4, 01:28:36.275 "num_base_bdevs_operational": 4, 01:28:36.275 "process": { 01:28:36.275 "type": "rebuild", 01:28:36.275 "target": "spare", 01:28:36.275 "progress": { 01:28:36.275 "blocks": 20480, 01:28:36.275 "percent": 32 01:28:36.275 } 01:28:36.275 }, 01:28:36.275 "base_bdevs_list": [ 01:28:36.275 { 
01:28:36.275 "name": "spare", 01:28:36.275 "uuid": "da38be9e-9621-5740-879f-f28a10856d3a", 01:28:36.275 "is_configured": true, 01:28:36.275 "data_offset": 2048, 01:28:36.275 "data_size": 63488 01:28:36.275 }, 01:28:36.275 { 01:28:36.275 "name": "BaseBdev2", 01:28:36.275 "uuid": "30794da6-74f0-5c1e-bf9f-f63639ac69e3", 01:28:36.275 "is_configured": true, 01:28:36.275 "data_offset": 2048, 01:28:36.275 "data_size": 63488 01:28:36.275 }, 01:28:36.275 { 01:28:36.275 "name": "BaseBdev3", 01:28:36.275 "uuid": "d5772271-daaf-55d1-b5a9-e3ae1e029592", 01:28:36.275 "is_configured": true, 01:28:36.275 "data_offset": 2048, 01:28:36.275 "data_size": 63488 01:28:36.275 }, 01:28:36.275 { 01:28:36.275 "name": "BaseBdev4", 01:28:36.275 "uuid": "05f9e8c8-2a06-57d6-a714-a9e4baa45555", 01:28:36.275 "is_configured": true, 01:28:36.275 "data_offset": 2048, 01:28:36.275 "data_size": 63488 01:28:36.275 } 01:28:36.275 ] 01:28:36.275 }' 01:28:36.275 05:23:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:28:36.275 05:23:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 01:28:36.275 05:23:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:28:36.275 05:23:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 01:28:36.275 05:23:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 01:28:36.275 05:23:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:36.275 05:23:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:28:36.275 [2024-12-09 05:23:27.743111] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 01:28:36.275 [2024-12-09 05:23:27.787907] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 01:28:36.275 [2024-12-09 
05:23:27.788255] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:28:36.275 [2024-12-09 05:23:27.788509] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 01:28:36.275 [2024-12-09 05:23:27.788541] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 01:28:36.275 05:23:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:36.275 05:23:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 01:28:36.275 05:23:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:28:36.275 05:23:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:28:36.275 05:23:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:28:36.275 05:23:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:28:36.275 05:23:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 01:28:36.275 05:23:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:28:36.275 05:23:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:28:36.275 05:23:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:28:36.275 05:23:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 01:28:36.275 05:23:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:28:36.275 05:23:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:28:36.275 05:23:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:36.275 05:23:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
01:28:36.275 05:23:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:36.275 05:23:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:28:36.275 "name": "raid_bdev1", 01:28:36.275 "uuid": "9aa0d077-09aa-4037-bf11-ce1a48527e8d", 01:28:36.275 "strip_size_kb": 0, 01:28:36.275 "state": "online", 01:28:36.275 "raid_level": "raid1", 01:28:36.275 "superblock": true, 01:28:36.275 "num_base_bdevs": 4, 01:28:36.275 "num_base_bdevs_discovered": 3, 01:28:36.275 "num_base_bdevs_operational": 3, 01:28:36.275 "base_bdevs_list": [ 01:28:36.275 { 01:28:36.275 "name": null, 01:28:36.275 "uuid": "00000000-0000-0000-0000-000000000000", 01:28:36.275 "is_configured": false, 01:28:36.275 "data_offset": 0, 01:28:36.275 "data_size": 63488 01:28:36.275 }, 01:28:36.275 { 01:28:36.275 "name": "BaseBdev2", 01:28:36.275 "uuid": "30794da6-74f0-5c1e-bf9f-f63639ac69e3", 01:28:36.275 "is_configured": true, 01:28:36.275 "data_offset": 2048, 01:28:36.275 "data_size": 63488 01:28:36.275 }, 01:28:36.275 { 01:28:36.275 "name": "BaseBdev3", 01:28:36.275 "uuid": "d5772271-daaf-55d1-b5a9-e3ae1e029592", 01:28:36.275 "is_configured": true, 01:28:36.275 "data_offset": 2048, 01:28:36.275 "data_size": 63488 01:28:36.275 }, 01:28:36.275 { 01:28:36.275 "name": "BaseBdev4", 01:28:36.275 "uuid": "05f9e8c8-2a06-57d6-a714-a9e4baa45555", 01:28:36.275 "is_configured": true, 01:28:36.275 "data_offset": 2048, 01:28:36.275 "data_size": 63488 01:28:36.275 } 01:28:36.275 ] 01:28:36.275 }' 01:28:36.275 05:23:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:28:36.275 05:23:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:28:36.840 05:23:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 01:28:36.840 05:23:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:28:36.840 05:23:28 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 01:28:36.840 05:23:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 01:28:36.840 05:23:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:28:36.840 05:23:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:28:36.840 05:23:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:36.840 05:23:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:28:36.840 05:23:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:28:36.840 05:23:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:36.840 05:23:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:28:36.840 "name": "raid_bdev1", 01:28:36.840 "uuid": "9aa0d077-09aa-4037-bf11-ce1a48527e8d", 01:28:36.840 "strip_size_kb": 0, 01:28:36.840 "state": "online", 01:28:36.840 "raid_level": "raid1", 01:28:36.840 "superblock": true, 01:28:36.840 "num_base_bdevs": 4, 01:28:36.840 "num_base_bdevs_discovered": 3, 01:28:36.840 "num_base_bdevs_operational": 3, 01:28:36.840 "base_bdevs_list": [ 01:28:36.840 { 01:28:36.840 "name": null, 01:28:36.840 "uuid": "00000000-0000-0000-0000-000000000000", 01:28:36.840 "is_configured": false, 01:28:36.840 "data_offset": 0, 01:28:36.840 "data_size": 63488 01:28:36.840 }, 01:28:36.840 { 01:28:36.840 "name": "BaseBdev2", 01:28:36.840 "uuid": "30794da6-74f0-5c1e-bf9f-f63639ac69e3", 01:28:36.840 "is_configured": true, 01:28:36.840 "data_offset": 2048, 01:28:36.840 "data_size": 63488 01:28:36.840 }, 01:28:36.840 { 01:28:36.840 "name": "BaseBdev3", 01:28:36.840 "uuid": "d5772271-daaf-55d1-b5a9-e3ae1e029592", 01:28:36.840 "is_configured": true, 01:28:36.840 "data_offset": 2048, 01:28:36.840 "data_size": 63488 
01:28:36.840 }, 01:28:36.840 { 01:28:36.840 "name": "BaseBdev4", 01:28:36.840 "uuid": "05f9e8c8-2a06-57d6-a714-a9e4baa45555", 01:28:36.840 "is_configured": true, 01:28:36.840 "data_offset": 2048, 01:28:36.840 "data_size": 63488 01:28:36.840 } 01:28:36.840 ] 01:28:36.840 }' 01:28:36.840 05:23:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:28:36.840 05:23:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 01:28:36.840 05:23:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:28:37.097 05:23:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 01:28:37.097 05:23:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 01:28:37.097 05:23:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:37.097 05:23:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:28:37.097 [2024-12-09 05:23:28.494460] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 01:28:37.097 [2024-12-09 05:23:28.509214] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca35d0 01:28:37.097 05:23:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:37.097 05:23:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 01:28:37.097 [2024-12-09 05:23:28.511861] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 01:28:38.031 05:23:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 01:28:38.031 05:23:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:28:38.031 05:23:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 
01:28:38.031 05:23:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 01:28:38.031 05:23:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:28:38.031 05:23:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:28:38.031 05:23:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:38.031 05:23:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:28:38.031 05:23:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:28:38.031 05:23:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:38.031 05:23:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:28:38.031 "name": "raid_bdev1", 01:28:38.031 "uuid": "9aa0d077-09aa-4037-bf11-ce1a48527e8d", 01:28:38.031 "strip_size_kb": 0, 01:28:38.031 "state": "online", 01:28:38.031 "raid_level": "raid1", 01:28:38.031 "superblock": true, 01:28:38.031 "num_base_bdevs": 4, 01:28:38.031 "num_base_bdevs_discovered": 4, 01:28:38.031 "num_base_bdevs_operational": 4, 01:28:38.031 "process": { 01:28:38.031 "type": "rebuild", 01:28:38.031 "target": "spare", 01:28:38.031 "progress": { 01:28:38.031 "blocks": 20480, 01:28:38.031 "percent": 32 01:28:38.031 } 01:28:38.031 }, 01:28:38.031 "base_bdevs_list": [ 01:28:38.031 { 01:28:38.031 "name": "spare", 01:28:38.031 "uuid": "da38be9e-9621-5740-879f-f28a10856d3a", 01:28:38.031 "is_configured": true, 01:28:38.031 "data_offset": 2048, 01:28:38.031 "data_size": 63488 01:28:38.031 }, 01:28:38.031 { 01:28:38.031 "name": "BaseBdev2", 01:28:38.031 "uuid": "30794da6-74f0-5c1e-bf9f-f63639ac69e3", 01:28:38.031 "is_configured": true, 01:28:38.032 "data_offset": 2048, 01:28:38.032 "data_size": 63488 01:28:38.032 }, 01:28:38.032 { 01:28:38.032 "name": "BaseBdev3", 01:28:38.032 "uuid": 
"d5772271-daaf-55d1-b5a9-e3ae1e029592", 01:28:38.032 "is_configured": true, 01:28:38.032 "data_offset": 2048, 01:28:38.032 "data_size": 63488 01:28:38.032 }, 01:28:38.032 { 01:28:38.032 "name": "BaseBdev4", 01:28:38.032 "uuid": "05f9e8c8-2a06-57d6-a714-a9e4baa45555", 01:28:38.032 "is_configured": true, 01:28:38.032 "data_offset": 2048, 01:28:38.032 "data_size": 63488 01:28:38.032 } 01:28:38.032 ] 01:28:38.032 }' 01:28:38.032 05:23:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:28:38.032 05:23:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 01:28:38.032 05:23:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:28:38.290 05:23:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 01:28:38.290 05:23:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 01:28:38.290 05:23:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 01:28:38.290 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 01:28:38.290 05:23:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 01:28:38.290 05:23:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 01:28:38.290 05:23:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 01:28:38.290 05:23:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 01:28:38.290 05:23:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:38.290 05:23:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:28:38.290 [2024-12-09 05:23:29.681413] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 01:28:38.290 [2024-12-09 05:23:29.821490] 
bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000ca35d0 01:28:38.290 05:23:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:38.290 05:23:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 01:28:38.290 05:23:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 01:28:38.290 05:23:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 01:28:38.290 05:23:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:28:38.290 05:23:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 01:28:38.290 05:23:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 01:28:38.290 05:23:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:28:38.290 05:23:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:28:38.290 05:23:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:38.290 05:23:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:28:38.290 05:23:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:28:38.290 05:23:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:38.290 05:23:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:28:38.290 "name": "raid_bdev1", 01:28:38.290 "uuid": "9aa0d077-09aa-4037-bf11-ce1a48527e8d", 01:28:38.290 "strip_size_kb": 0, 01:28:38.290 "state": "online", 01:28:38.290 "raid_level": "raid1", 01:28:38.290 "superblock": true, 01:28:38.290 "num_base_bdevs": 4, 01:28:38.290 "num_base_bdevs_discovered": 3, 01:28:38.290 "num_base_bdevs_operational": 3, 01:28:38.290 
"process": { 01:28:38.290 "type": "rebuild", 01:28:38.290 "target": "spare", 01:28:38.290 "progress": { 01:28:38.290 "blocks": 24576, 01:28:38.290 "percent": 38 01:28:38.290 } 01:28:38.290 }, 01:28:38.290 "base_bdevs_list": [ 01:28:38.290 { 01:28:38.290 "name": "spare", 01:28:38.290 "uuid": "da38be9e-9621-5740-879f-f28a10856d3a", 01:28:38.290 "is_configured": true, 01:28:38.290 "data_offset": 2048, 01:28:38.290 "data_size": 63488 01:28:38.290 }, 01:28:38.290 { 01:28:38.290 "name": null, 01:28:38.290 "uuid": "00000000-0000-0000-0000-000000000000", 01:28:38.290 "is_configured": false, 01:28:38.290 "data_offset": 0, 01:28:38.290 "data_size": 63488 01:28:38.290 }, 01:28:38.290 { 01:28:38.290 "name": "BaseBdev3", 01:28:38.290 "uuid": "d5772271-daaf-55d1-b5a9-e3ae1e029592", 01:28:38.290 "is_configured": true, 01:28:38.290 "data_offset": 2048, 01:28:38.290 "data_size": 63488 01:28:38.290 }, 01:28:38.290 { 01:28:38.290 "name": "BaseBdev4", 01:28:38.290 "uuid": "05f9e8c8-2a06-57d6-a714-a9e4baa45555", 01:28:38.290 "is_configured": true, 01:28:38.290 "data_offset": 2048, 01:28:38.290 "data_size": 63488 01:28:38.290 } 01:28:38.290 ] 01:28:38.290 }' 01:28:38.290 05:23:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:28:38.548 05:23:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 01:28:38.549 05:23:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:28:38.549 05:23:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 01:28:38.549 05:23:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=511 01:28:38.549 05:23:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 01:28:38.549 05:23:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 01:28:38.549 05:23:29 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:28:38.549 05:23:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 01:28:38.549 05:23:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 01:28:38.549 05:23:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:28:38.549 05:23:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:28:38.549 05:23:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:38.549 05:23:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:28:38.549 05:23:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:28:38.549 05:23:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:38.549 05:23:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:28:38.549 "name": "raid_bdev1", 01:28:38.549 "uuid": "9aa0d077-09aa-4037-bf11-ce1a48527e8d", 01:28:38.549 "strip_size_kb": 0, 01:28:38.549 "state": "online", 01:28:38.549 "raid_level": "raid1", 01:28:38.549 "superblock": true, 01:28:38.549 "num_base_bdevs": 4, 01:28:38.549 "num_base_bdevs_discovered": 3, 01:28:38.549 "num_base_bdevs_operational": 3, 01:28:38.549 "process": { 01:28:38.549 "type": "rebuild", 01:28:38.549 "target": "spare", 01:28:38.549 "progress": { 01:28:38.549 "blocks": 26624, 01:28:38.549 "percent": 41 01:28:38.549 } 01:28:38.549 }, 01:28:38.549 "base_bdevs_list": [ 01:28:38.549 { 01:28:38.549 "name": "spare", 01:28:38.549 "uuid": "da38be9e-9621-5740-879f-f28a10856d3a", 01:28:38.549 "is_configured": true, 01:28:38.549 "data_offset": 2048, 01:28:38.549 "data_size": 63488 01:28:38.549 }, 01:28:38.549 { 01:28:38.549 "name": null, 01:28:38.549 "uuid": "00000000-0000-0000-0000-000000000000", 01:28:38.549 
"is_configured": false, 01:28:38.549 "data_offset": 0, 01:28:38.549 "data_size": 63488 01:28:38.549 }, 01:28:38.549 { 01:28:38.549 "name": "BaseBdev3", 01:28:38.549 "uuid": "d5772271-daaf-55d1-b5a9-e3ae1e029592", 01:28:38.549 "is_configured": true, 01:28:38.549 "data_offset": 2048, 01:28:38.549 "data_size": 63488 01:28:38.549 }, 01:28:38.549 { 01:28:38.549 "name": "BaseBdev4", 01:28:38.549 "uuid": "05f9e8c8-2a06-57d6-a714-a9e4baa45555", 01:28:38.549 "is_configured": true, 01:28:38.549 "data_offset": 2048, 01:28:38.549 "data_size": 63488 01:28:38.549 } 01:28:38.549 ] 01:28:38.549 }' 01:28:38.549 05:23:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:28:38.549 05:23:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 01:28:38.549 05:23:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:28:38.549 05:23:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 01:28:38.549 05:23:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 01:28:39.966 05:23:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 01:28:39.966 05:23:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 01:28:39.966 05:23:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:28:39.966 05:23:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 01:28:39.966 05:23:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 01:28:39.966 05:23:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:28:39.966 05:23:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:28:39.966 05:23:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 
-- # jq -r '.[] | select(.name == "raid_bdev1")' 01:28:39.966 05:23:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:39.966 05:23:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:28:39.966 05:23:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:39.966 05:23:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:28:39.966 "name": "raid_bdev1", 01:28:39.966 "uuid": "9aa0d077-09aa-4037-bf11-ce1a48527e8d", 01:28:39.966 "strip_size_kb": 0, 01:28:39.966 "state": "online", 01:28:39.966 "raid_level": "raid1", 01:28:39.966 "superblock": true, 01:28:39.966 "num_base_bdevs": 4, 01:28:39.966 "num_base_bdevs_discovered": 3, 01:28:39.966 "num_base_bdevs_operational": 3, 01:28:39.966 "process": { 01:28:39.966 "type": "rebuild", 01:28:39.966 "target": "spare", 01:28:39.966 "progress": { 01:28:39.966 "blocks": 51200, 01:28:39.966 "percent": 80 01:28:39.966 } 01:28:39.966 }, 01:28:39.966 "base_bdevs_list": [ 01:28:39.966 { 01:28:39.966 "name": "spare", 01:28:39.966 "uuid": "da38be9e-9621-5740-879f-f28a10856d3a", 01:28:39.966 "is_configured": true, 01:28:39.966 "data_offset": 2048, 01:28:39.966 "data_size": 63488 01:28:39.966 }, 01:28:39.966 { 01:28:39.966 "name": null, 01:28:39.966 "uuid": "00000000-0000-0000-0000-000000000000", 01:28:39.966 "is_configured": false, 01:28:39.966 "data_offset": 0, 01:28:39.966 "data_size": 63488 01:28:39.966 }, 01:28:39.966 { 01:28:39.966 "name": "BaseBdev3", 01:28:39.966 "uuid": "d5772271-daaf-55d1-b5a9-e3ae1e029592", 01:28:39.966 "is_configured": true, 01:28:39.966 "data_offset": 2048, 01:28:39.966 "data_size": 63488 01:28:39.966 }, 01:28:39.966 { 01:28:39.966 "name": "BaseBdev4", 01:28:39.966 "uuid": "05f9e8c8-2a06-57d6-a714-a9e4baa45555", 01:28:39.966 "is_configured": true, 01:28:39.966 "data_offset": 2048, 01:28:39.966 "data_size": 63488 01:28:39.966 } 01:28:39.966 ] 01:28:39.966 }' 01:28:39.966 
05:23:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:28:39.966 05:23:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 01:28:39.966 05:23:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:28:39.966 05:23:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 01:28:39.966 05:23:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 01:28:40.224 [2024-12-09 05:23:31.736512] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 01:28:40.224 [2024-12-09 05:23:31.736612] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 01:28:40.224 [2024-12-09 05:23:31.736849] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:28:40.792 05:23:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 01:28:40.792 05:23:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 01:28:40.792 05:23:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:28:40.792 05:23:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 01:28:40.792 05:23:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 01:28:40.792 05:23:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:28:40.792 05:23:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:28:40.792 05:23:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:28:40.792 05:23:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:40.792 05:23:32 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 01:28:40.792 05:23:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:40.792 05:23:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:28:40.792 "name": "raid_bdev1", 01:28:40.792 "uuid": "9aa0d077-09aa-4037-bf11-ce1a48527e8d", 01:28:40.792 "strip_size_kb": 0, 01:28:40.792 "state": "online", 01:28:40.792 "raid_level": "raid1", 01:28:40.792 "superblock": true, 01:28:40.792 "num_base_bdevs": 4, 01:28:40.792 "num_base_bdevs_discovered": 3, 01:28:40.792 "num_base_bdevs_operational": 3, 01:28:40.792 "base_bdevs_list": [ 01:28:40.792 { 01:28:40.792 "name": "spare", 01:28:40.792 "uuid": "da38be9e-9621-5740-879f-f28a10856d3a", 01:28:40.792 "is_configured": true, 01:28:40.793 "data_offset": 2048, 01:28:40.793 "data_size": 63488 01:28:40.793 }, 01:28:40.793 { 01:28:40.793 "name": null, 01:28:40.793 "uuid": "00000000-0000-0000-0000-000000000000", 01:28:40.793 "is_configured": false, 01:28:40.793 "data_offset": 0, 01:28:40.793 "data_size": 63488 01:28:40.793 }, 01:28:40.793 { 01:28:40.793 "name": "BaseBdev3", 01:28:40.793 "uuid": "d5772271-daaf-55d1-b5a9-e3ae1e029592", 01:28:40.793 "is_configured": true, 01:28:40.793 "data_offset": 2048, 01:28:40.793 "data_size": 63488 01:28:40.793 }, 01:28:40.793 { 01:28:40.793 "name": "BaseBdev4", 01:28:40.793 "uuid": "05f9e8c8-2a06-57d6-a714-a9e4baa45555", 01:28:40.793 "is_configured": true, 01:28:40.793 "data_offset": 2048, 01:28:40.793 "data_size": 63488 01:28:40.793 } 01:28:40.793 ] 01:28:40.793 }' 01:28:40.793 05:23:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:28:40.793 05:23:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 01:28:41.052 05:23:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:28:41.052 05:23:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none 
== \s\p\a\r\e ]] 01:28:41.052 05:23:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 01:28:41.052 05:23:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 01:28:41.052 05:23:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:28:41.052 05:23:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 01:28:41.052 05:23:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 01:28:41.052 05:23:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:28:41.052 05:23:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:28:41.052 05:23:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:41.052 05:23:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:28:41.052 05:23:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:28:41.052 05:23:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:41.052 05:23:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:28:41.052 "name": "raid_bdev1", 01:28:41.052 "uuid": "9aa0d077-09aa-4037-bf11-ce1a48527e8d", 01:28:41.052 "strip_size_kb": 0, 01:28:41.052 "state": "online", 01:28:41.052 "raid_level": "raid1", 01:28:41.052 "superblock": true, 01:28:41.052 "num_base_bdevs": 4, 01:28:41.052 "num_base_bdevs_discovered": 3, 01:28:41.052 "num_base_bdevs_operational": 3, 01:28:41.052 "base_bdevs_list": [ 01:28:41.052 { 01:28:41.052 "name": "spare", 01:28:41.052 "uuid": "da38be9e-9621-5740-879f-f28a10856d3a", 01:28:41.052 "is_configured": true, 01:28:41.052 "data_offset": 2048, 01:28:41.052 "data_size": 63488 01:28:41.052 }, 01:28:41.052 { 01:28:41.052 "name": null, 01:28:41.052 "uuid": 
"00000000-0000-0000-0000-000000000000", 01:28:41.052 "is_configured": false, 01:28:41.052 "data_offset": 0, 01:28:41.052 "data_size": 63488 01:28:41.052 }, 01:28:41.052 { 01:28:41.052 "name": "BaseBdev3", 01:28:41.052 "uuid": "d5772271-daaf-55d1-b5a9-e3ae1e029592", 01:28:41.052 "is_configured": true, 01:28:41.052 "data_offset": 2048, 01:28:41.052 "data_size": 63488 01:28:41.052 }, 01:28:41.052 { 01:28:41.052 "name": "BaseBdev4", 01:28:41.052 "uuid": "05f9e8c8-2a06-57d6-a714-a9e4baa45555", 01:28:41.052 "is_configured": true, 01:28:41.052 "data_offset": 2048, 01:28:41.052 "data_size": 63488 01:28:41.052 } 01:28:41.052 ] 01:28:41.052 }' 01:28:41.052 05:23:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:28:41.052 05:23:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 01:28:41.052 05:23:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:28:41.052 05:23:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 01:28:41.052 05:23:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 01:28:41.053 05:23:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:28:41.053 05:23:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:28:41.053 05:23:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:28:41.053 05:23:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:28:41.053 05:23:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 01:28:41.053 05:23:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:28:41.053 05:23:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:28:41.053 
05:23:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:28:41.053 05:23:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 01:28:41.053 05:23:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:28:41.053 05:23:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:41.053 05:23:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:28:41.053 05:23:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:28:41.053 05:23:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:41.053 05:23:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:28:41.053 "name": "raid_bdev1", 01:28:41.053 "uuid": "9aa0d077-09aa-4037-bf11-ce1a48527e8d", 01:28:41.053 "strip_size_kb": 0, 01:28:41.053 "state": "online", 01:28:41.053 "raid_level": "raid1", 01:28:41.053 "superblock": true, 01:28:41.053 "num_base_bdevs": 4, 01:28:41.053 "num_base_bdevs_discovered": 3, 01:28:41.053 "num_base_bdevs_operational": 3, 01:28:41.053 "base_bdevs_list": [ 01:28:41.053 { 01:28:41.053 "name": "spare", 01:28:41.053 "uuid": "da38be9e-9621-5740-879f-f28a10856d3a", 01:28:41.053 "is_configured": true, 01:28:41.053 "data_offset": 2048, 01:28:41.053 "data_size": 63488 01:28:41.053 }, 01:28:41.053 { 01:28:41.053 "name": null, 01:28:41.053 "uuid": "00000000-0000-0000-0000-000000000000", 01:28:41.053 "is_configured": false, 01:28:41.053 "data_offset": 0, 01:28:41.053 "data_size": 63488 01:28:41.053 }, 01:28:41.053 { 01:28:41.053 "name": "BaseBdev3", 01:28:41.053 "uuid": "d5772271-daaf-55d1-b5a9-e3ae1e029592", 01:28:41.053 "is_configured": true, 01:28:41.053 "data_offset": 2048, 01:28:41.053 "data_size": 63488 01:28:41.053 }, 01:28:41.053 { 01:28:41.053 "name": "BaseBdev4", 01:28:41.053 "uuid": 
"05f9e8c8-2a06-57d6-a714-a9e4baa45555", 01:28:41.053 "is_configured": true, 01:28:41.053 "data_offset": 2048, 01:28:41.053 "data_size": 63488 01:28:41.053 } 01:28:41.053 ] 01:28:41.053 }' 01:28:41.053 05:23:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:28:41.053 05:23:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:28:41.619 05:23:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 01:28:41.619 05:23:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:41.619 05:23:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:28:41.619 [2024-12-09 05:23:33.133759] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 01:28:41.619 [2024-12-09 05:23:33.133804] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 01:28:41.619 [2024-12-09 05:23:33.133981] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 01:28:41.619 [2024-12-09 05:23:33.134121] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 01:28:41.619 [2024-12-09 05:23:33.134145] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 01:28:41.619 05:23:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:41.619 05:23:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 01:28:41.619 05:23:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:41.619 05:23:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:28:41.619 05:23:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 01:28:41.619 05:23:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
01:28:41.619 05:23:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 01:28:41.619 05:23:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 01:28:41.619 05:23:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 01:28:41.619 05:23:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 01:28:41.619 05:23:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 01:28:41.619 05:23:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 01:28:41.619 05:23:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 01:28:41.619 05:23:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 01:28:41.619 05:23:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 01:28:41.619 05:23:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 01:28:41.619 05:23:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 01:28:41.619 05:23:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 01:28:41.619 05:23:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 01:28:41.877 /dev/nbd0 01:28:42.135 05:23:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 01:28:42.135 05:23:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 01:28:42.135 05:23:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 01:28:42.135 05:23:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 01:28:42.135 05:23:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # 
(( i = 1 )) 01:28:42.135 05:23:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 01:28:42.135 05:23:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 01:28:42.135 05:23:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 01:28:42.135 05:23:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 01:28:42.135 05:23:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 01:28:42.135 05:23:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 01:28:42.135 1+0 records in 01:28:42.135 1+0 records out 01:28:42.135 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000341953 s, 12.0 MB/s 01:28:42.135 05:23:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:28:42.135 05:23:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 01:28:42.135 05:23:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:28:42.135 05:23:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 01:28:42.135 05:23:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 01:28:42.135 05:23:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 01:28:42.135 05:23:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 01:28:42.135 05:23:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 01:28:42.393 /dev/nbd1 01:28:42.393 05:23:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 01:28:42.393 05:23:33 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 01:28:42.393 05:23:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 01:28:42.393 05:23:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 01:28:42.393 05:23:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 01:28:42.393 05:23:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 01:28:42.393 05:23:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 01:28:42.393 05:23:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 01:28:42.393 05:23:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 01:28:42.393 05:23:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 01:28:42.393 05:23:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 01:28:42.393 1+0 records in 01:28:42.393 1+0 records out 01:28:42.393 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000332205 s, 12.3 MB/s 01:28:42.393 05:23:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:28:42.393 05:23:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 01:28:42.393 05:23:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:28:42.393 05:23:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 01:28:42.393 05:23:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 01:28:42.393 05:23:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 01:28:42.393 05:23:33 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@14 -- # (( i < 2 )) 01:28:42.393 05:23:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 01:28:42.651 05:23:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 01:28:42.651 05:23:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 01:28:42.651 05:23:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 01:28:42.651 05:23:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 01:28:42.651 05:23:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 01:28:42.651 05:23:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 01:28:42.651 05:23:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 01:28:42.910 05:23:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 01:28:42.910 05:23:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 01:28:42.910 05:23:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 01:28:42.910 05:23:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:28:42.910 05:23:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:28:42.910 05:23:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 01:28:42.910 05:23:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 01:28:42.910 05:23:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 01:28:42.910 05:23:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 01:28:42.910 05:23:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 01:28:43.170 05:23:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 01:28:43.170 05:23:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 01:28:43.170 05:23:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 01:28:43.170 05:23:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:28:43.170 05:23:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:28:43.170 05:23:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 01:28:43.170 05:23:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 01:28:43.170 05:23:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 01:28:43.170 05:23:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 01:28:43.170 05:23:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 01:28:43.170 05:23:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:43.170 05:23:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:28:43.170 05:23:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:43.170 05:23:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 01:28:43.170 05:23:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:43.170 05:23:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:28:43.170 [2024-12-09 05:23:34.654377] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 01:28:43.170 [2024-12-09 05:23:34.654495] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 
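The data check in this run is `cmp -i 1048576 /dev/nbd0 /dev/nbd1`: `-i SKIP` makes `cmp` ignore the first SKIP bytes of *both* inputs, so the 1 MiB superblock region (the `data_offset` of 2048 blocks × 512-byte `blocklen` reported by the RPC) is excluded before the base bdev and the rebuilt spare are compared. A minimal stand-alone illustration of the same flag on regular files (file names and contents here are made up, not from the test):

```shell
# cmp -i SKIP skips SKIP leading bytes of both files; only the remainder
# is compared. The two temp files below differ only in their first 4 bytes,
# standing in for differing superblock regions.
tmpdir=$(mktemp -d)
printf 'AAAApayload' > "$tmpdir/base"
printf 'BBBBpayload' > "$tmpdir/spare"

cmp -s "$tmpdir/base" "$tmpdir/spare" || echo "full compare: differ"
cmp -s -i 4 "$tmpdir/base" "$tmpdir/spare" && echo "past offset 4: identical"
```

With `-i 1048576` in the actual test, a successful (silent, zero-exit) `cmp` means the rebuilt data region matches the source device byte for byte.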
01:28:43.170 [2024-12-09 05:23:34.654531] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 01:28:43.170 [2024-12-09 05:23:34.654547] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:28:43.170 [2024-12-09 05:23:34.657543] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:28:43.170 [2024-12-09 05:23:34.657593] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 01:28:43.170 [2024-12-09 05:23:34.657704] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 01:28:43.170 [2024-12-09 05:23:34.657797] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 01:28:43.170 [2024-12-09 05:23:34.658009] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 01:28:43.170 [2024-12-09 05:23:34.658150] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 01:28:43.170 spare 01:28:43.170 05:23:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:43.170 05:23:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 01:28:43.170 05:23:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:43.170 05:23:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:28:43.170 [2024-12-09 05:23:34.758287] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 01:28:43.170 [2024-12-09 05:23:34.758316] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 01:28:43.170 [2024-12-09 05:23:34.758721] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 01:28:43.170 [2024-12-09 05:23:34.758957] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 01:28:43.170 [2024-12-09 05:23:34.758987] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 01:28:43.170 [2024-12-09 05:23:34.759194] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:28:43.170 05:23:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:43.170 05:23:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 01:28:43.170 05:23:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:28:43.170 05:23:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:28:43.170 05:23:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:28:43.170 05:23:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:28:43.170 05:23:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 01:28:43.170 05:23:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:28:43.170 05:23:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:28:43.170 05:23:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:28:43.170 05:23:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 01:28:43.170 05:23:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:28:43.170 05:23:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:43.170 05:23:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:28:43.170 05:23:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:28:43.170 05:23:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:43.429 
05:23:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:28:43.429 "name": "raid_bdev1", 01:28:43.429 "uuid": "9aa0d077-09aa-4037-bf11-ce1a48527e8d", 01:28:43.429 "strip_size_kb": 0, 01:28:43.429 "state": "online", 01:28:43.429 "raid_level": "raid1", 01:28:43.429 "superblock": true, 01:28:43.429 "num_base_bdevs": 4, 01:28:43.429 "num_base_bdevs_discovered": 3, 01:28:43.429 "num_base_bdevs_operational": 3, 01:28:43.429 "base_bdevs_list": [ 01:28:43.429 { 01:28:43.429 "name": "spare", 01:28:43.429 "uuid": "da38be9e-9621-5740-879f-f28a10856d3a", 01:28:43.429 "is_configured": true, 01:28:43.429 "data_offset": 2048, 01:28:43.429 "data_size": 63488 01:28:43.429 }, 01:28:43.429 { 01:28:43.429 "name": null, 01:28:43.429 "uuid": "00000000-0000-0000-0000-000000000000", 01:28:43.429 "is_configured": false, 01:28:43.429 "data_offset": 2048, 01:28:43.429 "data_size": 63488 01:28:43.429 }, 01:28:43.429 { 01:28:43.429 "name": "BaseBdev3", 01:28:43.429 "uuid": "d5772271-daaf-55d1-b5a9-e3ae1e029592", 01:28:43.429 "is_configured": true, 01:28:43.429 "data_offset": 2048, 01:28:43.429 "data_size": 63488 01:28:43.429 }, 01:28:43.429 { 01:28:43.429 "name": "BaseBdev4", 01:28:43.429 "uuid": "05f9e8c8-2a06-57d6-a714-a9e4baa45555", 01:28:43.429 "is_configured": true, 01:28:43.429 "data_offset": 2048, 01:28:43.429 "data_size": 63488 01:28:43.429 } 01:28:43.429 ] 01:28:43.429 }' 01:28:43.429 05:23:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:28:43.429 05:23:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:28:43.687 05:23:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 01:28:43.687 05:23:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:28:43.687 05:23:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 01:28:43.687 05:23:35 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 01:28:43.687 05:23:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:28:43.687 05:23:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:28:43.687 05:23:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:28:43.687 05:23:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:43.687 05:23:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:28:43.944 05:23:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:43.944 05:23:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:28:43.944 "name": "raid_bdev1", 01:28:43.944 "uuid": "9aa0d077-09aa-4037-bf11-ce1a48527e8d", 01:28:43.944 "strip_size_kb": 0, 01:28:43.944 "state": "online", 01:28:43.944 "raid_level": "raid1", 01:28:43.944 "superblock": true, 01:28:43.944 "num_base_bdevs": 4, 01:28:43.944 "num_base_bdevs_discovered": 3, 01:28:43.944 "num_base_bdevs_operational": 3, 01:28:43.944 "base_bdevs_list": [ 01:28:43.944 { 01:28:43.944 "name": "spare", 01:28:43.944 "uuid": "da38be9e-9621-5740-879f-f28a10856d3a", 01:28:43.944 "is_configured": true, 01:28:43.944 "data_offset": 2048, 01:28:43.944 "data_size": 63488 01:28:43.944 }, 01:28:43.944 { 01:28:43.944 "name": null, 01:28:43.944 "uuid": "00000000-0000-0000-0000-000000000000", 01:28:43.944 "is_configured": false, 01:28:43.944 "data_offset": 2048, 01:28:43.944 "data_size": 63488 01:28:43.944 }, 01:28:43.944 { 01:28:43.944 "name": "BaseBdev3", 01:28:43.944 "uuid": "d5772271-daaf-55d1-b5a9-e3ae1e029592", 01:28:43.944 "is_configured": true, 01:28:43.944 "data_offset": 2048, 01:28:43.944 "data_size": 63488 01:28:43.944 }, 01:28:43.944 { 01:28:43.944 "name": "BaseBdev4", 01:28:43.944 "uuid": 
"05f9e8c8-2a06-57d6-a714-a9e4baa45555", 01:28:43.944 "is_configured": true, 01:28:43.944 "data_offset": 2048, 01:28:43.944 "data_size": 63488 01:28:43.944 } 01:28:43.944 ] 01:28:43.944 }' 01:28:43.944 05:23:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:28:43.944 05:23:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 01:28:43.944 05:23:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:28:43.944 05:23:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 01:28:43.944 05:23:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 01:28:43.944 05:23:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:43.944 05:23:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:28:43.944 05:23:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 01:28:43.944 05:23:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:43.944 05:23:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 01:28:43.944 05:23:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 01:28:43.944 05:23:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:43.944 05:23:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:28:43.944 [2024-12-09 05:23:35.503356] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 01:28:43.944 05:23:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:43.944 05:23:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 01:28:43.944 05:23:35 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:28:43.944 05:23:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:28:43.944 05:23:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:28:43.944 05:23:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:28:43.944 05:23:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 01:28:43.944 05:23:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:28:43.944 05:23:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:28:43.944 05:23:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:28:43.944 05:23:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 01:28:43.944 05:23:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:28:43.944 05:23:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:28:43.944 05:23:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:43.944 05:23:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:28:43.944 05:23:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:44.202 05:23:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:28:44.202 "name": "raid_bdev1", 01:28:44.202 "uuid": "9aa0d077-09aa-4037-bf11-ce1a48527e8d", 01:28:44.202 "strip_size_kb": 0, 01:28:44.202 "state": "online", 01:28:44.202 "raid_level": "raid1", 01:28:44.202 "superblock": true, 01:28:44.202 "num_base_bdevs": 4, 01:28:44.202 "num_base_bdevs_discovered": 2, 01:28:44.202 "num_base_bdevs_operational": 2, 01:28:44.202 "base_bdevs_list": [ 01:28:44.202 { 
01:28:44.202 "name": null, 01:28:44.202 "uuid": "00000000-0000-0000-0000-000000000000", 01:28:44.202 "is_configured": false, 01:28:44.202 "data_offset": 0, 01:28:44.202 "data_size": 63488 01:28:44.202 }, 01:28:44.202 { 01:28:44.202 "name": null, 01:28:44.202 "uuid": "00000000-0000-0000-0000-000000000000", 01:28:44.202 "is_configured": false, 01:28:44.202 "data_offset": 2048, 01:28:44.202 "data_size": 63488 01:28:44.202 }, 01:28:44.202 { 01:28:44.202 "name": "BaseBdev3", 01:28:44.202 "uuid": "d5772271-daaf-55d1-b5a9-e3ae1e029592", 01:28:44.202 "is_configured": true, 01:28:44.202 "data_offset": 2048, 01:28:44.202 "data_size": 63488 01:28:44.202 }, 01:28:44.202 { 01:28:44.202 "name": "BaseBdev4", 01:28:44.202 "uuid": "05f9e8c8-2a06-57d6-a714-a9e4baa45555", 01:28:44.202 "is_configured": true, 01:28:44.202 "data_offset": 2048, 01:28:44.202 "data_size": 63488 01:28:44.202 } 01:28:44.202 ] 01:28:44.202 }' 01:28:44.202 05:23:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:28:44.202 05:23:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:28:44.460 05:23:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 01:28:44.460 05:23:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:44.460 05:23:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:28:44.460 [2024-12-09 05:23:36.039609] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 01:28:44.460 [2024-12-09 05:23:36.039998] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 01:28:44.460 [2024-12-09 05:23:36.040030] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
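The NOTICE just above explains why the spare gets rebuilt instead of trusted as-is: its on-disk superblock sequence number (5) is older than the raid bdev's (6), so its data is stale. A hedged sketch of that comparison as a shell predicate (the helper name is illustrative; SPDK's real check lives in C in `raid_bdev_examine_sb`):

```shell
# Decide whether an examined base bdev must be rebuilt: a superblock
# seq_number lower than the raid bdev's current one marks stale data.
# Illustrative helper only, not SPDK's actual implementation.
needs_rebuild() {
    local bdev_seq=$1 raid_seq=$2
    (( bdev_seq < raid_seq ))
}

needs_rebuild 5 6 && echo "spare (seq 5) older than raid_bdev1 (seq 6): rebuild"
needs_rebuild 6 6 || echo "seq numbers match: data is current"
```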
01:28:44.460 [2024-12-09 05:23:36.040099] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 01:28:44.460 [2024-12-09 05:23:36.053940] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1d50 01:28:44.460 05:23:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:44.460 05:23:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 01:28:44.460 [2024-12-09 05:23:36.056690] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 01:28:45.835 05:23:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 01:28:45.835 05:23:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:28:45.835 05:23:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 01:28:45.835 05:23:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 01:28:45.835 05:23:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:28:45.835 05:23:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:28:45.835 05:23:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:28:45.835 05:23:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:45.835 05:23:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:28:45.835 05:23:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:45.835 05:23:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:28:45.835 "name": "raid_bdev1", 01:28:45.835 "uuid": "9aa0d077-09aa-4037-bf11-ce1a48527e8d", 01:28:45.835 "strip_size_kb": 0, 01:28:45.835 "state": "online", 01:28:45.835 "raid_level": "raid1", 
01:28:45.835 "superblock": true, 01:28:45.835 "num_base_bdevs": 4, 01:28:45.835 "num_base_bdevs_discovered": 3, 01:28:45.835 "num_base_bdevs_operational": 3, 01:28:45.835 "process": { 01:28:45.835 "type": "rebuild", 01:28:45.835 "target": "spare", 01:28:45.835 "progress": { 01:28:45.835 "blocks": 20480, 01:28:45.835 "percent": 32 01:28:45.835 } 01:28:45.835 }, 01:28:45.835 "base_bdevs_list": [ 01:28:45.835 { 01:28:45.835 "name": "spare", 01:28:45.835 "uuid": "da38be9e-9621-5740-879f-f28a10856d3a", 01:28:45.835 "is_configured": true, 01:28:45.835 "data_offset": 2048, 01:28:45.835 "data_size": 63488 01:28:45.835 }, 01:28:45.835 { 01:28:45.835 "name": null, 01:28:45.835 "uuid": "00000000-0000-0000-0000-000000000000", 01:28:45.835 "is_configured": false, 01:28:45.835 "data_offset": 2048, 01:28:45.835 "data_size": 63488 01:28:45.835 }, 01:28:45.835 { 01:28:45.835 "name": "BaseBdev3", 01:28:45.835 "uuid": "d5772271-daaf-55d1-b5a9-e3ae1e029592", 01:28:45.835 "is_configured": true, 01:28:45.835 "data_offset": 2048, 01:28:45.835 "data_size": 63488 01:28:45.835 }, 01:28:45.835 { 01:28:45.835 "name": "BaseBdev4", 01:28:45.835 "uuid": "05f9e8c8-2a06-57d6-a714-a9e4baa45555", 01:28:45.835 "is_configured": true, 01:28:45.835 "data_offset": 2048, 01:28:45.835 "data_size": 63488 01:28:45.835 } 01:28:45.835 ] 01:28:45.835 }' 01:28:45.835 05:23:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:28:45.835 05:23:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 01:28:45.835 05:23:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:28:45.835 05:23:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 01:28:45.835 05:23:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 01:28:45.835 05:23:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 01:28:45.835 05:23:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:28:45.835 [2024-12-09 05:23:37.222407] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 01:28:45.835 [2024-12-09 05:23:37.266335] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 01:28:45.835 [2024-12-09 05:23:37.266466] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:28:45.835 [2024-12-09 05:23:37.266525] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 01:28:45.835 [2024-12-09 05:23:37.266537] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 01:28:45.835 05:23:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:45.835 05:23:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 01:28:45.835 05:23:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:28:45.835 05:23:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:28:45.835 05:23:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:28:45.835 05:23:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:28:45.835 05:23:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 01:28:45.835 05:23:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:28:45.835 05:23:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:28:45.835 05:23:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:28:45.835 05:23:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 01:28:45.835 05:23:37 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:28:45.835 05:23:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:45.835 05:23:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:28:45.835 05:23:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:28:45.835 05:23:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:45.835 05:23:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:28:45.835 "name": "raid_bdev1", 01:28:45.835 "uuid": "9aa0d077-09aa-4037-bf11-ce1a48527e8d", 01:28:45.835 "strip_size_kb": 0, 01:28:45.835 "state": "online", 01:28:45.835 "raid_level": "raid1", 01:28:45.835 "superblock": true, 01:28:45.835 "num_base_bdevs": 4, 01:28:45.835 "num_base_bdevs_discovered": 2, 01:28:45.835 "num_base_bdevs_operational": 2, 01:28:45.835 "base_bdevs_list": [ 01:28:45.835 { 01:28:45.835 "name": null, 01:28:45.835 "uuid": "00000000-0000-0000-0000-000000000000", 01:28:45.835 "is_configured": false, 01:28:45.835 "data_offset": 0, 01:28:45.835 "data_size": 63488 01:28:45.835 }, 01:28:45.835 { 01:28:45.835 "name": null, 01:28:45.835 "uuid": "00000000-0000-0000-0000-000000000000", 01:28:45.835 "is_configured": false, 01:28:45.835 "data_offset": 2048, 01:28:45.835 "data_size": 63488 01:28:45.835 }, 01:28:45.835 { 01:28:45.835 "name": "BaseBdev3", 01:28:45.835 "uuid": "d5772271-daaf-55d1-b5a9-e3ae1e029592", 01:28:45.835 "is_configured": true, 01:28:45.835 "data_offset": 2048, 01:28:45.835 "data_size": 63488 01:28:45.835 }, 01:28:45.835 { 01:28:45.835 "name": "BaseBdev4", 01:28:45.835 "uuid": "05f9e8c8-2a06-57d6-a714-a9e4baa45555", 01:28:45.835 "is_configured": true, 01:28:45.835 "data_offset": 2048, 01:28:45.835 "data_size": 63488 01:28:45.835 } 01:28:45.835 ] 01:28:45.835 }' 01:28:45.835 05:23:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 01:28:45.835 05:23:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:28:46.406 05:23:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 01:28:46.406 05:23:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:46.406 05:23:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:28:46.406 [2024-12-09 05:23:37.832253] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 01:28:46.406 [2024-12-09 05:23:37.832503] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:28:46.406 [2024-12-09 05:23:37.832559] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 01:28:46.406 [2024-12-09 05:23:37.832577] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:28:46.406 [2024-12-09 05:23:37.833215] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:28:46.406 [2024-12-09 05:23:37.833244] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 01:28:46.406 [2024-12-09 05:23:37.833427] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 01:28:46.406 [2024-12-09 05:23:37.833448] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 01:28:46.406 [2024-12-09 05:23:37.833471] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
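`verify_raid_bdev_state`, traced repeatedly above, fetches the whole `bdev_raid_get_bdevs all` array and narrows it with `jq -r '.[] | select(.name == "raid_bdev1")'` before checking individual fields (note the `// "none"` default used for the optional `process` object). A sketch of the same filtering against a trimmed, hard-coded copy of the RPC output (requires `jq`; the JSON is abbreviated from the log, not live RPC data):

```shell
# Narrow the bdev_raid_get_bdevs array to one raid bdev and read its
# fields, mirroring verify_raid_bdev_state. JSON trimmed from the log.
raid_bdevs='[{"name":"raid_bdev1","state":"online","raid_level":"raid1",
              "num_base_bdevs":4,"num_base_bdevs_discovered":2}]'

info=$(jq -r '.[] | select(.name == "raid_bdev1")' <<< "$raid_bdevs")
state=$(jq -r '.state' <<< "$info")
discovered=$(jq -r '.num_base_bdevs_discovered' <<< "$info")
ptype=$(jq -r '.process.type // "none"' <<< "$info")   # no process key -> "none"

[ "$state" = online ] && [ "$discovered" -eq 2 ] && [ "$ptype" = none ] &&
    echo "raid_bdev1 state verified"
```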
01:28:46.406 [2024-12-09 05:23:37.833500] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 01:28:46.406 [2024-12-09 05:23:37.846098] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1e20 01:28:46.406 spare 01:28:46.406 05:23:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:46.406 05:23:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 01:28:46.406 [2024-12-09 05:23:37.848618] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 01:28:47.340 05:23:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 01:28:47.340 05:23:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:28:47.340 05:23:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 01:28:47.340 05:23:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 01:28:47.340 05:23:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:28:47.340 05:23:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:28:47.340 05:23:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:28:47.340 05:23:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:47.340 05:23:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:28:47.340 05:23:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:47.340 05:23:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:28:47.340 "name": "raid_bdev1", 01:28:47.340 "uuid": "9aa0d077-09aa-4037-bf11-ce1a48527e8d", 01:28:47.340 "strip_size_kb": 0, 01:28:47.340 "state": "online", 01:28:47.340 
"raid_level": "raid1", 01:28:47.340 "superblock": true, 01:28:47.340 "num_base_bdevs": 4, 01:28:47.340 "num_base_bdevs_discovered": 3, 01:28:47.340 "num_base_bdevs_operational": 3, 01:28:47.340 "process": { 01:28:47.340 "type": "rebuild", 01:28:47.340 "target": "spare", 01:28:47.340 "progress": { 01:28:47.340 "blocks": 20480, 01:28:47.340 "percent": 32 01:28:47.340 } 01:28:47.340 }, 01:28:47.340 "base_bdevs_list": [ 01:28:47.340 { 01:28:47.340 "name": "spare", 01:28:47.340 "uuid": "da38be9e-9621-5740-879f-f28a10856d3a", 01:28:47.340 "is_configured": true, 01:28:47.340 "data_offset": 2048, 01:28:47.340 "data_size": 63488 01:28:47.340 }, 01:28:47.340 { 01:28:47.340 "name": null, 01:28:47.340 "uuid": "00000000-0000-0000-0000-000000000000", 01:28:47.340 "is_configured": false, 01:28:47.340 "data_offset": 2048, 01:28:47.340 "data_size": 63488 01:28:47.340 }, 01:28:47.340 { 01:28:47.340 "name": "BaseBdev3", 01:28:47.340 "uuid": "d5772271-daaf-55d1-b5a9-e3ae1e029592", 01:28:47.340 "is_configured": true, 01:28:47.340 "data_offset": 2048, 01:28:47.340 "data_size": 63488 01:28:47.340 }, 01:28:47.340 { 01:28:47.340 "name": "BaseBdev4", 01:28:47.340 "uuid": "05f9e8c8-2a06-57d6-a714-a9e4baa45555", 01:28:47.340 "is_configured": true, 01:28:47.340 "data_offset": 2048, 01:28:47.340 "data_size": 63488 01:28:47.340 } 01:28:47.340 ] 01:28:47.340 }' 01:28:47.340 05:23:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:28:47.598 05:23:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 01:28:47.598 05:23:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:28:47.598 05:23:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 01:28:47.598 05:23:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 01:28:47.598 05:23:39 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 01:28:47.598 05:23:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:28:47.598 [2024-12-09 05:23:39.021768] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 01:28:47.598 [2024-12-09 05:23:39.056847] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 01:28:47.598 [2024-12-09 05:23:39.056934] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:28:47.598 [2024-12-09 05:23:39.056960] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 01:28:47.598 [2024-12-09 05:23:39.056975] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 01:28:47.598 05:23:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:47.598 05:23:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 01:28:47.598 05:23:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:28:47.598 05:23:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:28:47.598 05:23:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:28:47.598 05:23:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:28:47.598 05:23:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 01:28:47.598 05:23:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:28:47.598 05:23:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:28:47.598 05:23:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:28:47.598 05:23:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 01:28:47.598 
05:23:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:28:47.598 05:23:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:28:47.598 05:23:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:47.598 05:23:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:28:47.598 05:23:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:47.598 05:23:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:28:47.598 "name": "raid_bdev1", 01:28:47.598 "uuid": "9aa0d077-09aa-4037-bf11-ce1a48527e8d", 01:28:47.598 "strip_size_kb": 0, 01:28:47.598 "state": "online", 01:28:47.598 "raid_level": "raid1", 01:28:47.598 "superblock": true, 01:28:47.598 "num_base_bdevs": 4, 01:28:47.598 "num_base_bdevs_discovered": 2, 01:28:47.598 "num_base_bdevs_operational": 2, 01:28:47.598 "base_bdevs_list": [ 01:28:47.598 { 01:28:47.598 "name": null, 01:28:47.598 "uuid": "00000000-0000-0000-0000-000000000000", 01:28:47.598 "is_configured": false, 01:28:47.598 "data_offset": 0, 01:28:47.598 "data_size": 63488 01:28:47.598 }, 01:28:47.598 { 01:28:47.598 "name": null, 01:28:47.598 "uuid": "00000000-0000-0000-0000-000000000000", 01:28:47.598 "is_configured": false, 01:28:47.598 "data_offset": 2048, 01:28:47.598 "data_size": 63488 01:28:47.598 }, 01:28:47.598 { 01:28:47.598 "name": "BaseBdev3", 01:28:47.598 "uuid": "d5772271-daaf-55d1-b5a9-e3ae1e029592", 01:28:47.598 "is_configured": true, 01:28:47.598 "data_offset": 2048, 01:28:47.598 "data_size": 63488 01:28:47.598 }, 01:28:47.598 { 01:28:47.598 "name": "BaseBdev4", 01:28:47.599 "uuid": "05f9e8c8-2a06-57d6-a714-a9e4baa45555", 01:28:47.599 "is_configured": true, 01:28:47.599 "data_offset": 2048, 01:28:47.599 "data_size": 63488 01:28:47.599 } 01:28:47.599 ] 01:28:47.599 }' 01:28:47.599 05:23:39 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:28:47.599 05:23:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:28:48.164 05:23:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 01:28:48.164 05:23:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:28:48.164 05:23:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 01:28:48.164 05:23:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 01:28:48.164 05:23:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:28:48.164 05:23:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:28:48.164 05:23:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:28:48.164 05:23:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:48.164 05:23:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:28:48.164 05:23:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:48.164 05:23:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:28:48.164 "name": "raid_bdev1", 01:28:48.164 "uuid": "9aa0d077-09aa-4037-bf11-ce1a48527e8d", 01:28:48.164 "strip_size_kb": 0, 01:28:48.164 "state": "online", 01:28:48.164 "raid_level": "raid1", 01:28:48.164 "superblock": true, 01:28:48.164 "num_base_bdevs": 4, 01:28:48.164 "num_base_bdevs_discovered": 2, 01:28:48.164 "num_base_bdevs_operational": 2, 01:28:48.164 "base_bdevs_list": [ 01:28:48.164 { 01:28:48.164 "name": null, 01:28:48.164 "uuid": "00000000-0000-0000-0000-000000000000", 01:28:48.164 "is_configured": false, 01:28:48.164 "data_offset": 0, 01:28:48.164 "data_size": 63488 01:28:48.164 }, 01:28:48.164 
{ 01:28:48.164 "name": null, 01:28:48.164 "uuid": "00000000-0000-0000-0000-000000000000", 01:28:48.164 "is_configured": false, 01:28:48.164 "data_offset": 2048, 01:28:48.164 "data_size": 63488 01:28:48.164 }, 01:28:48.164 { 01:28:48.164 "name": "BaseBdev3", 01:28:48.164 "uuid": "d5772271-daaf-55d1-b5a9-e3ae1e029592", 01:28:48.164 "is_configured": true, 01:28:48.164 "data_offset": 2048, 01:28:48.164 "data_size": 63488 01:28:48.164 }, 01:28:48.164 { 01:28:48.164 "name": "BaseBdev4", 01:28:48.164 "uuid": "05f9e8c8-2a06-57d6-a714-a9e4baa45555", 01:28:48.164 "is_configured": true, 01:28:48.164 "data_offset": 2048, 01:28:48.164 "data_size": 63488 01:28:48.164 } 01:28:48.165 ] 01:28:48.165 }' 01:28:48.165 05:23:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:28:48.165 05:23:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 01:28:48.165 05:23:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:28:48.165 05:23:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 01:28:48.165 05:23:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 01:28:48.165 05:23:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:48.165 05:23:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:28:48.165 05:23:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:48.165 05:23:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 01:28:48.165 05:23:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:48.165 05:23:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:28:48.165 [2024-12-09 05:23:39.736181] vbdev_passthru.c: 
608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 01:28:48.165 [2024-12-09 05:23:39.736263] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:28:48.165 [2024-12-09 05:23:39.736291] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 01:28:48.165 [2024-12-09 05:23:39.736309] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:28:48.165 [2024-12-09 05:23:39.736913] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:28:48.165 [2024-12-09 05:23:39.736950] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 01:28:48.165 [2024-12-09 05:23:39.737041] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 01:28:48.165 [2024-12-09 05:23:39.737066] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 01:28:48.165 [2024-12-09 05:23:39.737092] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 01:28:48.165 [2024-12-09 05:23:39.737153] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 01:28:48.165 BaseBdev1 01:28:48.165 05:23:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:48.165 05:23:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 01:28:49.539 05:23:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 01:28:49.539 05:23:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:28:49.539 05:23:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:28:49.539 05:23:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:28:49.539 05:23:40 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:28:49.539 05:23:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 01:28:49.539 05:23:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:28:49.539 05:23:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:28:49.539 05:23:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:28:49.539 05:23:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 01:28:49.539 05:23:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:28:49.539 05:23:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:28:49.539 05:23:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:49.539 05:23:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:28:49.539 05:23:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:49.539 05:23:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:28:49.539 "name": "raid_bdev1", 01:28:49.539 "uuid": "9aa0d077-09aa-4037-bf11-ce1a48527e8d", 01:28:49.539 "strip_size_kb": 0, 01:28:49.539 "state": "online", 01:28:49.539 "raid_level": "raid1", 01:28:49.539 "superblock": true, 01:28:49.539 "num_base_bdevs": 4, 01:28:49.539 "num_base_bdevs_discovered": 2, 01:28:49.539 "num_base_bdevs_operational": 2, 01:28:49.539 "base_bdevs_list": [ 01:28:49.539 { 01:28:49.539 "name": null, 01:28:49.539 "uuid": "00000000-0000-0000-0000-000000000000", 01:28:49.539 "is_configured": false, 01:28:49.539 "data_offset": 0, 01:28:49.539 "data_size": 63488 01:28:49.539 }, 01:28:49.539 { 01:28:49.539 "name": null, 01:28:49.539 "uuid": "00000000-0000-0000-0000-000000000000", 01:28:49.539 
"is_configured": false, 01:28:49.539 "data_offset": 2048, 01:28:49.539 "data_size": 63488 01:28:49.539 }, 01:28:49.539 { 01:28:49.539 "name": "BaseBdev3", 01:28:49.539 "uuid": "d5772271-daaf-55d1-b5a9-e3ae1e029592", 01:28:49.539 "is_configured": true, 01:28:49.539 "data_offset": 2048, 01:28:49.539 "data_size": 63488 01:28:49.539 }, 01:28:49.539 { 01:28:49.539 "name": "BaseBdev4", 01:28:49.539 "uuid": "05f9e8c8-2a06-57d6-a714-a9e4baa45555", 01:28:49.539 "is_configured": true, 01:28:49.539 "data_offset": 2048, 01:28:49.539 "data_size": 63488 01:28:49.539 } 01:28:49.539 ] 01:28:49.539 }' 01:28:49.539 05:23:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:28:49.539 05:23:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:28:49.797 05:23:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 01:28:49.797 05:23:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:28:49.797 05:23:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 01:28:49.797 05:23:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 01:28:49.797 05:23:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:28:49.797 05:23:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:28:49.797 05:23:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:49.797 05:23:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:28:49.797 05:23:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:28:49.797 05:23:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:49.797 05:23:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
01:28:49.797 "name": "raid_bdev1", 01:28:49.797 "uuid": "9aa0d077-09aa-4037-bf11-ce1a48527e8d", 01:28:49.797 "strip_size_kb": 0, 01:28:49.797 "state": "online", 01:28:49.797 "raid_level": "raid1", 01:28:49.797 "superblock": true, 01:28:49.797 "num_base_bdevs": 4, 01:28:49.797 "num_base_bdevs_discovered": 2, 01:28:49.797 "num_base_bdevs_operational": 2, 01:28:49.797 "base_bdevs_list": [ 01:28:49.797 { 01:28:49.797 "name": null, 01:28:49.797 "uuid": "00000000-0000-0000-0000-000000000000", 01:28:49.797 "is_configured": false, 01:28:49.797 "data_offset": 0, 01:28:49.797 "data_size": 63488 01:28:49.797 }, 01:28:49.797 { 01:28:49.797 "name": null, 01:28:49.797 "uuid": "00000000-0000-0000-0000-000000000000", 01:28:49.797 "is_configured": false, 01:28:49.797 "data_offset": 2048, 01:28:49.797 "data_size": 63488 01:28:49.797 }, 01:28:49.797 { 01:28:49.797 "name": "BaseBdev3", 01:28:49.797 "uuid": "d5772271-daaf-55d1-b5a9-e3ae1e029592", 01:28:49.797 "is_configured": true, 01:28:49.797 "data_offset": 2048, 01:28:49.797 "data_size": 63488 01:28:49.797 }, 01:28:49.797 { 01:28:49.797 "name": "BaseBdev4", 01:28:49.797 "uuid": "05f9e8c8-2a06-57d6-a714-a9e4baa45555", 01:28:49.797 "is_configured": true, 01:28:49.797 "data_offset": 2048, 01:28:49.797 "data_size": 63488 01:28:49.797 } 01:28:49.797 ] 01:28:49.797 }' 01:28:49.797 05:23:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:28:49.797 05:23:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 01:28:49.797 05:23:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:28:50.056 05:23:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 01:28:50.056 05:23:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 01:28:50.056 05:23:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local 
es=0 01:28:50.056 05:23:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 01:28:50.056 05:23:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 01:28:50.056 05:23:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:28:50.056 05:23:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 01:28:50.056 05:23:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:28:50.056 05:23:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 01:28:50.056 05:23:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:50.056 05:23:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:28:50.056 [2024-12-09 05:23:41.456875] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 01:28:50.056 [2024-12-09 05:23:41.457193] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 01:28:50.056 [2024-12-09 05:23:41.457213] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 01:28:50.056 request: 01:28:50.056 { 01:28:50.056 "base_bdev": "BaseBdev1", 01:28:50.056 "raid_bdev": "raid_bdev1", 01:28:50.056 "method": "bdev_raid_add_base_bdev", 01:28:50.056 "req_id": 1 01:28:50.056 } 01:28:50.056 Got JSON-RPC error response 01:28:50.056 response: 01:28:50.056 { 01:28:50.056 "code": -22, 01:28:50.056 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 01:28:50.056 } 01:28:50.056 05:23:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 01:28:50.056 05:23:41 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@655 -- # es=1 01:28:50.056 05:23:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:28:50.056 05:23:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:28:50.056 05:23:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:28:50.056 05:23:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 01:28:50.992 05:23:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 01:28:50.992 05:23:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:28:50.992 05:23:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:28:50.992 05:23:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:28:50.992 05:23:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:28:50.992 05:23:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 01:28:50.992 05:23:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:28:50.992 05:23:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:28:50.992 05:23:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:28:50.992 05:23:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 01:28:50.993 05:23:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:28:50.993 05:23:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:28:50.993 05:23:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:50.993 05:23:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
01:28:50.993 05:23:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:50.993 05:23:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:28:50.993 "name": "raid_bdev1", 01:28:50.993 "uuid": "9aa0d077-09aa-4037-bf11-ce1a48527e8d", 01:28:50.993 "strip_size_kb": 0, 01:28:50.993 "state": "online", 01:28:50.993 "raid_level": "raid1", 01:28:50.993 "superblock": true, 01:28:50.993 "num_base_bdevs": 4, 01:28:50.993 "num_base_bdevs_discovered": 2, 01:28:50.993 "num_base_bdevs_operational": 2, 01:28:50.993 "base_bdevs_list": [ 01:28:50.993 { 01:28:50.993 "name": null, 01:28:50.993 "uuid": "00000000-0000-0000-0000-000000000000", 01:28:50.993 "is_configured": false, 01:28:50.993 "data_offset": 0, 01:28:50.993 "data_size": 63488 01:28:50.993 }, 01:28:50.993 { 01:28:50.993 "name": null, 01:28:50.993 "uuid": "00000000-0000-0000-0000-000000000000", 01:28:50.993 "is_configured": false, 01:28:50.993 "data_offset": 2048, 01:28:50.993 "data_size": 63488 01:28:50.993 }, 01:28:50.993 { 01:28:50.993 "name": "BaseBdev3", 01:28:50.993 "uuid": "d5772271-daaf-55d1-b5a9-e3ae1e029592", 01:28:50.993 "is_configured": true, 01:28:50.993 "data_offset": 2048, 01:28:50.993 "data_size": 63488 01:28:50.993 }, 01:28:50.993 { 01:28:50.993 "name": "BaseBdev4", 01:28:50.993 "uuid": "05f9e8c8-2a06-57d6-a714-a9e4baa45555", 01:28:50.993 "is_configured": true, 01:28:50.993 "data_offset": 2048, 01:28:50.993 "data_size": 63488 01:28:50.993 } 01:28:50.993 ] 01:28:50.993 }' 01:28:50.993 05:23:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:28:50.993 05:23:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:28:51.560 05:23:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 01:28:51.560 05:23:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:28:51.560 05:23:43 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 01:28:51.560 05:23:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 01:28:51.560 05:23:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:28:51.560 05:23:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:28:51.560 05:23:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:51.560 05:23:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:28:51.560 05:23:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:28:51.560 05:23:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:51.560 05:23:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:28:51.560 "name": "raid_bdev1", 01:28:51.560 "uuid": "9aa0d077-09aa-4037-bf11-ce1a48527e8d", 01:28:51.560 "strip_size_kb": 0, 01:28:51.560 "state": "online", 01:28:51.560 "raid_level": "raid1", 01:28:51.560 "superblock": true, 01:28:51.560 "num_base_bdevs": 4, 01:28:51.560 "num_base_bdevs_discovered": 2, 01:28:51.560 "num_base_bdevs_operational": 2, 01:28:51.560 "base_bdevs_list": [ 01:28:51.560 { 01:28:51.560 "name": null, 01:28:51.560 "uuid": "00000000-0000-0000-0000-000000000000", 01:28:51.560 "is_configured": false, 01:28:51.560 "data_offset": 0, 01:28:51.560 "data_size": 63488 01:28:51.560 }, 01:28:51.560 { 01:28:51.560 "name": null, 01:28:51.560 "uuid": "00000000-0000-0000-0000-000000000000", 01:28:51.560 "is_configured": false, 01:28:51.560 "data_offset": 2048, 01:28:51.560 "data_size": 63488 01:28:51.560 }, 01:28:51.560 { 01:28:51.560 "name": "BaseBdev3", 01:28:51.560 "uuid": "d5772271-daaf-55d1-b5a9-e3ae1e029592", 01:28:51.560 "is_configured": true, 01:28:51.560 "data_offset": 2048, 01:28:51.560 "data_size": 63488 01:28:51.560 }, 
01:28:51.560 { 01:28:51.560 "name": "BaseBdev4", 01:28:51.560 "uuid": "05f9e8c8-2a06-57d6-a714-a9e4baa45555", 01:28:51.560 "is_configured": true, 01:28:51.560 "data_offset": 2048, 01:28:51.560 "data_size": 63488 01:28:51.560 } 01:28:51.560 ] 01:28:51.560 }' 01:28:51.560 05:23:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:28:51.560 05:23:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 01:28:51.560 05:23:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:28:51.819 05:23:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 01:28:51.819 05:23:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 78194 01:28:51.819 05:23:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 78194 ']' 01:28:51.819 05:23:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 78194 01:28:51.819 05:23:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 01:28:51.819 05:23:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:28:51.819 05:23:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78194 01:28:51.819 killing process with pid 78194 01:28:51.819 Received shutdown signal, test time was about 60.000000 seconds 01:28:51.819 01:28:51.819 Latency(us) 01:28:51.819 [2024-12-09T05:23:43.436Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:28:51.819 [2024-12-09T05:23:43.436Z] =================================================================================================================== 01:28:51.819 [2024-12-09T05:23:43.436Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 01:28:51.819 05:23:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 
01:28:51.819 05:23:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:28:51.819 05:23:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78194' 01:28:51.819 05:23:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 78194 01:28:51.819 [2024-12-09 05:23:43.237394] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 01:28:51.819 05:23:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 78194 01:28:51.819 [2024-12-09 05:23:43.237612] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 01:28:51.819 [2024-12-09 05:23:43.237714] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 01:28:51.819 [2024-12-09 05:23:43.237732] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 01:28:52.386 [2024-12-09 05:23:43.698553] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 01:28:53.321 05:23:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 01:28:53.321 01:28:53.321 real 0m29.961s 01:28:53.321 user 0m36.160s 01:28:53.321 sys 0m4.471s 01:28:53.321 05:23:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 01:28:53.321 ************************************ 01:28:53.321 END TEST raid_rebuild_test_sb 01:28:53.321 ************************************ 01:28:53.321 05:23:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:28:53.634 05:23:44 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 01:28:53.634 05:23:44 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 01:28:53.634 05:23:44 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 01:28:53.634 05:23:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 
01:28:53.634 ************************************ 01:28:53.634 START TEST raid_rebuild_test_io 01:28:53.634 ************************************ 01:28:53.634 05:23:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false true true 01:28:53.634 05:23:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 01:28:53.634 05:23:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 01:28:53.634 05:23:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 01:28:53.634 05:23:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 01:28:53.634 05:23:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 01:28:53.634 05:23:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 01:28:53.634 05:23:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 01:28:53.634 05:23:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 01:28:53.634 05:23:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 01:28:53.634 05:23:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 01:28:53.634 05:23:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 01:28:53.634 05:23:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 01:28:53.634 05:23:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 01:28:53.634 05:23:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 01:28:53.634 05:23:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 01:28:53.634 05:23:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 01:28:53.634 05:23:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev4 01:28:53.634 05:23:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 01:28:53.634 05:23:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 01:28:53.634 05:23:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 01:28:53.634 05:23:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 01:28:53.634 05:23:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 01:28:53.634 05:23:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 01:28:53.634 05:23:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 01:28:53.634 05:23:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 01:28:53.634 05:23:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 01:28:53.634 05:23:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 01:28:53.634 05:23:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 01:28:53.634 05:23:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 01:28:53.634 05:23:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=78992 01:28:53.634 05:23:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 78992 01:28:53.634 05:23:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 01:28:53.634 05:23:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 78992 ']' 01:28:53.634 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
01:28:53.634 05:23:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:28:53.634 05:23:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # local max_retries=100 01:28:53.634 05:23:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:28:53.634 05:23:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 01:28:53.634 05:23:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 01:28:53.634 [2024-12-09 05:23:45.132167] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:28:53.634 [2024-12-09 05:23:45.132684] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78992 ] 01:28:53.634 I/O size of 3145728 is greater than zero copy threshold (65536). 01:28:53.634 Zero copy mechanism will not be used.
01:28:53.891 [2024-12-09 05:23:45.331416] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:28:53.891 [2024-12-09 05:23:45.506528] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:28:54.149 [2024-12-09 05:23:45.727230] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 01:28:54.149 [2024-12-09 05:23:45.727277] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 01:28:54.713 05:23:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:28:54.713 05:23:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 01:28:54.713 05:23:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 01:28:54.713 05:23:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 01:28:54.713 05:23:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:54.713 05:23:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 01:28:54.713 BaseBdev1_malloc 01:28:54.713 05:23:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:54.713 05:23:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 01:28:54.713 05:23:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:54.713 05:23:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 01:28:54.713 [2024-12-09 05:23:46.275938] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 01:28:54.713 [2024-12-09 05:23:46.276045] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev
opened 01:28:54.713 [2024-12-09 05:23:46.276085] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 01:28:54.713 [2024-12-09 05:23:46.276121] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:28:54.713 [2024-12-09 05:23:46.279727] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:28:54.713 BaseBdev1 01:28:54.713 [2024-12-09 05:23:46.280760] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 01:28:54.713 05:23:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:54.713 05:23:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 01:28:54.713 05:23:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 01:28:54.713 05:23:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:54.713 05:23:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 01:28:54.971 BaseBdev2_malloc 01:28:54.971 05:23:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:54.971 05:23:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 01:28:54.971 05:23:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:54.971 05:23:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 01:28:54.971 [2024-12-09 05:23:46.340122] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 01:28:54.971 [2024-12-09 05:23:46.340403] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:28:54.971 [2024-12-09 05:23:46.340467] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 01:28:54.971 [2024-12-09 05:23:46.340493] 
vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:28:54.971 [2024-12-09 05:23:46.344290] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:28:54.971 [2024-12-09 05:23:46.344424] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 01:28:54.971 BaseBdev2 01:28:54.971 05:23:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:54.971 05:23:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 01:28:54.971 05:23:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 01:28:54.971 05:23:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:54.971 05:23:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 01:28:54.971 BaseBdev3_malloc 01:28:54.971 05:23:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:54.971 05:23:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 01:28:54.971 05:23:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:54.971 05:23:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 01:28:54.971 [2024-12-09 05:23:46.413099] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 01:28:54.971 [2024-12-09 05:23:46.413208] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:28:54.971 [2024-12-09 05:23:46.413261] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 01:28:54.971 [2024-12-09 05:23:46.413292] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:28:54.971 [2024-12-09 05:23:46.416778] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 
01:28:54.971 [2024-12-09 05:23:46.416841] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 01:28:54.971 BaseBdev3 01:28:54.971 05:23:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:54.971 05:23:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 01:28:54.971 05:23:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 01:28:54.971 05:23:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:54.971 05:23:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 01:28:54.971 BaseBdev4_malloc 01:28:54.971 05:23:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:54.971 05:23:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 01:28:54.971 05:23:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:54.971 05:23:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 01:28:54.971 [2024-12-09 05:23:46.476141] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 01:28:54.971 [2024-12-09 05:23:46.476433] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:28:54.971 [2024-12-09 05:23:46.476483] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 01:28:54.971 [2024-12-09 05:23:46.476509] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:28:54.971 [2024-12-09 05:23:46.480096] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:28:54.971 [2024-12-09 05:23:46.480378] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 01:28:54.971 BaseBdev4 01:28:54.971 05:23:46 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:54.971 05:23:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 01:28:54.971 05:23:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:54.971 05:23:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 01:28:54.971 spare_malloc 01:28:54.971 05:23:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:54.971 05:23:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 01:28:54.971 05:23:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:54.971 05:23:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 01:28:54.971 spare_delay 01:28:54.971 05:23:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:54.971 05:23:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 01:28:54.971 05:23:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:54.971 05:23:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 01:28:54.971 [2024-12-09 05:23:46.541292] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 01:28:54.971 [2024-12-09 05:23:46.541423] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:28:54.971 [2024-12-09 05:23:46.541454] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 01:28:54.971 [2024-12-09 05:23:46.541474] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:28:54.971 [2024-12-09 05:23:46.544608] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 
01:28:54.971 [2024-12-09 05:23:46.544676] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 01:28:54.971 spare 01:28:54.971 05:23:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:54.971 05:23:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 01:28:54.971 05:23:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:54.971 05:23:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 01:28:54.971 [2024-12-09 05:23:46.553333] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 01:28:54.971 [2024-12-09 05:23:46.556114] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 01:28:54.971 [2024-12-09 05:23:46.556237] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 01:28:54.971 [2024-12-09 05:23:46.556341] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 01:28:54.971 [2024-12-09 05:23:46.556501] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 01:28:54.971 [2024-12-09 05:23:46.556528] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 01:28:54.971 [2024-12-09 05:23:46.556872] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 01:28:54.971 [2024-12-09 05:23:46.557109] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 01:28:54.971 [2024-12-09 05:23:46.557131] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 01:28:54.971 [2024-12-09 05:23:46.557386] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:28:54.971 05:23:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 01:28:54.971 05:23:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 01:28:54.971 05:23:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:28:54.971 05:23:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:28:54.971 05:23:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:28:54.971 05:23:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:28:54.971 05:23:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 01:28:54.971 05:23:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:28:54.971 05:23:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:28:54.971 05:23:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:28:54.971 05:23:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 01:28:54.971 05:23:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:28:54.971 05:23:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:54.971 05:23:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 01:28:54.971 05:23:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:28:54.971 05:23:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:55.228 05:23:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:28:55.228 "name": "raid_bdev1", 01:28:55.228 "uuid": "7d23d1b3-0a67-4814-9f5e-15aa38aa08fb", 01:28:55.228 "strip_size_kb": 0, 01:28:55.228 "state": "online", 01:28:55.228 "raid_level": "raid1", 01:28:55.228 "superblock": 
false, 01:28:55.228 "num_base_bdevs": 4, 01:28:55.228 "num_base_bdevs_discovered": 4, 01:28:55.228 "num_base_bdevs_operational": 4, 01:28:55.228 "base_bdevs_list": [ 01:28:55.228 { 01:28:55.228 "name": "BaseBdev1", 01:28:55.228 "uuid": "450faced-38b7-5215-8ead-a6d874f3273f", 01:28:55.228 "is_configured": true, 01:28:55.228 "data_offset": 0, 01:28:55.228 "data_size": 65536 01:28:55.228 }, 01:28:55.228 { 01:28:55.228 "name": "BaseBdev2", 01:28:55.228 "uuid": "2001bebd-8788-504d-935f-9554c1b19681", 01:28:55.228 "is_configured": true, 01:28:55.228 "data_offset": 0, 01:28:55.228 "data_size": 65536 01:28:55.228 }, 01:28:55.228 { 01:28:55.228 "name": "BaseBdev3", 01:28:55.228 "uuid": "02782383-8880-55e7-9b76-baf0ec001928", 01:28:55.228 "is_configured": true, 01:28:55.228 "data_offset": 0, 01:28:55.228 "data_size": 65536 01:28:55.228 }, 01:28:55.228 { 01:28:55.228 "name": "BaseBdev4", 01:28:55.228 "uuid": "fa4f2bea-e482-5f27-9618-ab0479f627b3", 01:28:55.228 "is_configured": true, 01:28:55.228 "data_offset": 0, 01:28:55.228 "data_size": 65536 01:28:55.228 } 01:28:55.228 ] 01:28:55.228 }' 01:28:55.228 05:23:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:28:55.228 05:23:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 01:28:55.485 05:23:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 01:28:55.485 05:23:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:55.485 05:23:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 01:28:55.485 05:23:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 01:28:55.742 [2024-12-09 05:23:47.102139] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 01:28:55.742 05:23:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:55.742 05:23:47 bdev_raid.raid_rebuild_test_io 
-- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 01:28:55.742 05:23:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 01:28:55.742 05:23:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:55.742 05:23:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 01:28:55.742 05:23:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 01:28:55.742 05:23:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:55.742 05:23:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 01:28:55.742 05:23:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 01:28:55.742 05:23:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 01:28:55.742 05:23:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 01:28:55.742 05:23:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:55.742 05:23:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 01:28:55.742 [2024-12-09 05:23:47.205695] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 01:28:55.742 05:23:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:55.742 05:23:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 01:28:55.742 05:23:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:28:55.742 05:23:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:28:55.742 05:23:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:28:55.742 05:23:47 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:28:55.742 05:23:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 01:28:55.742 05:23:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:28:55.742 05:23:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:28:55.742 05:23:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:28:55.742 05:23:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 01:28:55.742 05:23:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:28:55.742 05:23:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:28:55.742 05:23:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:55.742 05:23:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 01:28:55.742 05:23:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:55.742 05:23:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:28:55.742 "name": "raid_bdev1", 01:28:55.742 "uuid": "7d23d1b3-0a67-4814-9f5e-15aa38aa08fb", 01:28:55.742 "strip_size_kb": 0, 01:28:55.742 "state": "online", 01:28:55.742 "raid_level": "raid1", 01:28:55.742 "superblock": false, 01:28:55.742 "num_base_bdevs": 4, 01:28:55.742 "num_base_bdevs_discovered": 3, 01:28:55.742 "num_base_bdevs_operational": 3, 01:28:55.742 "base_bdevs_list": [ 01:28:55.742 { 01:28:55.742 "name": null, 01:28:55.742 "uuid": "00000000-0000-0000-0000-000000000000", 01:28:55.742 "is_configured": false, 01:28:55.742 "data_offset": 0, 01:28:55.742 "data_size": 65536 01:28:55.742 }, 01:28:55.742 { 01:28:55.742 "name": "BaseBdev2", 01:28:55.742 "uuid": "2001bebd-8788-504d-935f-9554c1b19681", 01:28:55.742 
"is_configured": true, 01:28:55.742 "data_offset": 0, 01:28:55.742 "data_size": 65536 01:28:55.742 }, 01:28:55.742 { 01:28:55.742 "name": "BaseBdev3", 01:28:55.742 "uuid": "02782383-8880-55e7-9b76-baf0ec001928", 01:28:55.742 "is_configured": true, 01:28:55.742 "data_offset": 0, 01:28:55.742 "data_size": 65536 01:28:55.742 }, 01:28:55.742 { 01:28:55.742 "name": "BaseBdev4", 01:28:55.742 "uuid": "fa4f2bea-e482-5f27-9618-ab0479f627b3", 01:28:55.742 "is_configured": true, 01:28:55.742 "data_offset": 0, 01:28:55.742 "data_size": 65536 01:28:55.742 } 01:28:55.742 ] 01:28:55.742 }' 01:28:55.742 05:23:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:28:55.742 05:23:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 01:28:55.742 [2024-12-09 05:23:47.342437] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 01:28:55.742 I/O size of 3145728 is greater than zero copy threshold (65536). 01:28:55.742 Zero copy mechanism will not be used. 01:28:55.742 Running I/O for 60 seconds... 
01:28:56.307 05:23:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 01:28:56.307 05:23:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:56.307 05:23:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 01:28:56.307 [2024-12-09 05:23:47.770532] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 01:28:56.307 05:23:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:56.307 05:23:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 01:28:56.307 [2024-12-09 05:23:47.867530] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 01:28:56.307 [2024-12-09 05:23:47.870294] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 01:28:56.564 [2024-12-09 05:23:47.990916] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 01:28:56.564 [2024-12-09 05:23:47.993113] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 01:28:56.821 [2024-12-09 05:23:48.224928] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 01:28:56.821 [2024-12-09 05:23:48.226208] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 01:28:57.078 124.00 IOPS, 372.00 MiB/s [2024-12-09T05:23:48.695Z] [2024-12-09 05:23:48.622190] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 01:28:57.336 [2024-12-09 05:23:48.764341] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 01:28:57.336 05:23:48 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 01:28:57.336 05:23:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:28:57.336 05:23:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 01:28:57.336 05:23:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 01:28:57.336 05:23:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:28:57.336 05:23:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:28:57.336 05:23:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:28:57.336 05:23:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:57.336 05:23:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 01:28:57.336 05:23:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:57.336 05:23:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:28:57.336 "name": "raid_bdev1", 01:28:57.336 "uuid": "7d23d1b3-0a67-4814-9f5e-15aa38aa08fb", 01:28:57.336 "strip_size_kb": 0, 01:28:57.336 "state": "online", 01:28:57.336 "raid_level": "raid1", 01:28:57.336 "superblock": false, 01:28:57.336 "num_base_bdevs": 4, 01:28:57.336 "num_base_bdevs_discovered": 4, 01:28:57.336 "num_base_bdevs_operational": 4, 01:28:57.336 "process": { 01:28:57.336 "type": "rebuild", 01:28:57.336 "target": "spare", 01:28:57.336 "progress": { 01:28:57.336 "blocks": 10240, 01:28:57.336 "percent": 15 01:28:57.336 } 01:28:57.336 }, 01:28:57.336 "base_bdevs_list": [ 01:28:57.336 { 01:28:57.336 "name": "spare", 01:28:57.336 "uuid": "be0e33c7-ae56-5286-ae71-39838495da0b", 01:28:57.336 "is_configured": true, 01:28:57.336 "data_offset": 0, 01:28:57.336 "data_size": 65536 01:28:57.336 }, 01:28:57.336 { 
01:28:57.336 "name": "BaseBdev2", 01:28:57.336 "uuid": "2001bebd-8788-504d-935f-9554c1b19681", 01:28:57.336 "is_configured": true, 01:28:57.336 "data_offset": 0, 01:28:57.336 "data_size": 65536 01:28:57.336 }, 01:28:57.336 { 01:28:57.336 "name": "BaseBdev3", 01:28:57.336 "uuid": "02782383-8880-55e7-9b76-baf0ec001928", 01:28:57.336 "is_configured": true, 01:28:57.336 "data_offset": 0, 01:28:57.336 "data_size": 65536 01:28:57.336 }, 01:28:57.336 { 01:28:57.336 "name": "BaseBdev4", 01:28:57.336 "uuid": "fa4f2bea-e482-5f27-9618-ab0479f627b3", 01:28:57.336 "is_configured": true, 01:28:57.336 "data_offset": 0, 01:28:57.336 "data_size": 65536 01:28:57.336 } 01:28:57.336 ] 01:28:57.336 }' 01:28:57.336 05:23:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:28:57.595 05:23:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 01:28:57.595 05:23:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:28:57.595 05:23:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 01:28:57.595 05:23:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 01:28:57.595 05:23:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:57.595 05:23:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 01:28:57.595 [2024-12-09 05:23:49.024329] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 01:28:57.595 [2024-12-09 05:23:49.035147] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 01:28:57.595 [2024-12-09 05:23:49.139069] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 01:28:57.595 [2024-12-09 05:23:49.151585] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 01:28:57.595 [2024-12-09 05:23:49.151890] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 01:28:57.595 [2024-12-09 05:23:49.151919] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 01:28:57.595 [2024-12-09 05:23:49.175046] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 01:28:57.595 05:23:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:57.595 05:23:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 01:28:57.595 05:23:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:28:57.595 05:23:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:28:57.595 05:23:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:28:57.595 05:23:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:28:57.595 05:23:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 01:28:57.595 05:23:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:28:57.595 05:23:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:28:57.595 05:23:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:28:57.595 05:23:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 01:28:57.595 05:23:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:28:57.595 05:23:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:28:57.595 05:23:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:57.595 05:23:49 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 01:28:57.854 05:23:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:57.854 05:23:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:28:57.854 "name": "raid_bdev1", 01:28:57.854 "uuid": "7d23d1b3-0a67-4814-9f5e-15aa38aa08fb", 01:28:57.854 "strip_size_kb": 0, 01:28:57.854 "state": "online", 01:28:57.854 "raid_level": "raid1", 01:28:57.854 "superblock": false, 01:28:57.854 "num_base_bdevs": 4, 01:28:57.854 "num_base_bdevs_discovered": 3, 01:28:57.854 "num_base_bdevs_operational": 3, 01:28:57.854 "base_bdevs_list": [ 01:28:57.854 { 01:28:57.854 "name": null, 01:28:57.854 "uuid": "00000000-0000-0000-0000-000000000000", 01:28:57.854 "is_configured": false, 01:28:57.854 "data_offset": 0, 01:28:57.854 "data_size": 65536 01:28:57.854 }, 01:28:57.854 { 01:28:57.854 "name": "BaseBdev2", 01:28:57.854 "uuid": "2001bebd-8788-504d-935f-9554c1b19681", 01:28:57.854 "is_configured": true, 01:28:57.854 "data_offset": 0, 01:28:57.854 "data_size": 65536 01:28:57.854 }, 01:28:57.854 { 01:28:57.854 "name": "BaseBdev3", 01:28:57.854 "uuid": "02782383-8880-55e7-9b76-baf0ec001928", 01:28:57.854 "is_configured": true, 01:28:57.854 "data_offset": 0, 01:28:57.854 "data_size": 65536 01:28:57.854 }, 01:28:57.854 { 01:28:57.854 "name": "BaseBdev4", 01:28:57.854 "uuid": "fa4f2bea-e482-5f27-9618-ab0479f627b3", 01:28:57.854 "is_configured": true, 01:28:57.854 "data_offset": 0, 01:28:57.854 "data_size": 65536 01:28:57.854 } 01:28:57.854 ] 01:28:57.854 }' 01:28:57.854 05:23:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:28:57.854 05:23:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 01:28:58.421 112.00 IOPS, 336.00 MiB/s [2024-12-09T05:23:50.038Z] 05:23:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 01:28:58.421 05:23:49 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:28:58.421 05:23:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 01:28:58.421 05:23:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 01:28:58.421 05:23:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:28:58.421 05:23:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:28:58.421 05:23:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:58.421 05:23:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:28:58.421 05:23:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 01:28:58.421 05:23:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:58.421 05:23:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:28:58.421 "name": "raid_bdev1", 01:28:58.421 "uuid": "7d23d1b3-0a67-4814-9f5e-15aa38aa08fb", 01:28:58.421 "strip_size_kb": 0, 01:28:58.421 "state": "online", 01:28:58.421 "raid_level": "raid1", 01:28:58.421 "superblock": false, 01:28:58.421 "num_base_bdevs": 4, 01:28:58.421 "num_base_bdevs_discovered": 3, 01:28:58.421 "num_base_bdevs_operational": 3, 01:28:58.421 "base_bdevs_list": [ 01:28:58.421 { 01:28:58.421 "name": null, 01:28:58.421 "uuid": "00000000-0000-0000-0000-000000000000", 01:28:58.421 "is_configured": false, 01:28:58.421 "data_offset": 0, 01:28:58.421 "data_size": 65536 01:28:58.421 }, 01:28:58.421 { 01:28:58.421 "name": "BaseBdev2", 01:28:58.421 "uuid": "2001bebd-8788-504d-935f-9554c1b19681", 01:28:58.421 "is_configured": true, 01:28:58.421 "data_offset": 0, 01:28:58.421 "data_size": 65536 01:28:58.421 }, 01:28:58.421 { 01:28:58.421 "name": "BaseBdev3", 01:28:58.421 "uuid": "02782383-8880-55e7-9b76-baf0ec001928", 
01:28:58.421 "is_configured": true, 01:28:58.421 "data_offset": 0, 01:28:58.421 "data_size": 65536 01:28:58.421 }, 01:28:58.421 { 01:28:58.421 "name": "BaseBdev4", 01:28:58.421 "uuid": "fa4f2bea-e482-5f27-9618-ab0479f627b3", 01:28:58.421 "is_configured": true, 01:28:58.421 "data_offset": 0, 01:28:58.421 "data_size": 65536 01:28:58.421 } 01:28:58.421 ] 01:28:58.421 }' 01:28:58.421 05:23:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:28:58.421 05:23:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 01:28:58.421 05:23:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:28:58.421 05:23:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 01:28:58.421 05:23:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 01:28:58.421 05:23:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:58.421 05:23:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 01:28:58.421 [2024-12-09 05:23:49.922127] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 01:28:58.421 05:23:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:58.421 05:23:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 01:28:58.421 [2024-12-09 05:23:49.994036] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 01:28:58.421 [2024-12-09 05:23:49.996960] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 01:28:58.680 [2024-12-09 05:23:50.129960] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 01:28:58.680 [2024-12-09 05:23:50.132374] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: 
process_offset: 2048 offset_begin: 0 offset_end: 6144 01:28:58.938 [2024-12-09 05:23:50.361509] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 01:28:58.938 [2024-12-09 05:23:50.362829] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 01:28:59.197 124.67 IOPS, 374.00 MiB/s [2024-12-09T05:23:50.814Z] [2024-12-09 05:23:50.717331] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 01:28:59.455 [2024-12-09 05:23:50.941238] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 01:28:59.455 [2024-12-09 05:23:50.942685] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 01:28:59.455 05:23:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 01:28:59.455 05:23:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:28:59.455 05:23:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 01:28:59.455 05:23:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 01:28:59.455 05:23:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:28:59.455 05:23:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:28:59.455 05:23:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:59.455 05:23:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:28:59.455 05:23:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 01:28:59.455 05:23:51 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:59.455 05:23:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:28:59.455 "name": "raid_bdev1", 01:28:59.455 "uuid": "7d23d1b3-0a67-4814-9f5e-15aa38aa08fb", 01:28:59.455 "strip_size_kb": 0, 01:28:59.455 "state": "online", 01:28:59.455 "raid_level": "raid1", 01:28:59.455 "superblock": false, 01:28:59.455 "num_base_bdevs": 4, 01:28:59.455 "num_base_bdevs_discovered": 4, 01:28:59.455 "num_base_bdevs_operational": 4, 01:28:59.455 "process": { 01:28:59.455 "type": "rebuild", 01:28:59.455 "target": "spare", 01:28:59.455 "progress": { 01:28:59.455 "blocks": 10240, 01:28:59.455 "percent": 15 01:28:59.455 } 01:28:59.455 }, 01:28:59.455 "base_bdevs_list": [ 01:28:59.455 { 01:28:59.455 "name": "spare", 01:28:59.455 "uuid": "be0e33c7-ae56-5286-ae71-39838495da0b", 01:28:59.455 "is_configured": true, 01:28:59.455 "data_offset": 0, 01:28:59.455 "data_size": 65536 01:28:59.455 }, 01:28:59.455 { 01:28:59.455 "name": "BaseBdev2", 01:28:59.455 "uuid": "2001bebd-8788-504d-935f-9554c1b19681", 01:28:59.455 "is_configured": true, 01:28:59.455 "data_offset": 0, 01:28:59.455 "data_size": 65536 01:28:59.455 }, 01:28:59.455 { 01:28:59.455 "name": "BaseBdev3", 01:28:59.455 "uuid": "02782383-8880-55e7-9b76-baf0ec001928", 01:28:59.455 "is_configured": true, 01:28:59.455 "data_offset": 0, 01:28:59.455 "data_size": 65536 01:28:59.455 }, 01:28:59.455 { 01:28:59.455 "name": "BaseBdev4", 01:28:59.455 "uuid": "fa4f2bea-e482-5f27-9618-ab0479f627b3", 01:28:59.455 "is_configured": true, 01:28:59.455 "data_offset": 0, 01:28:59.455 "data_size": 65536 01:28:59.455 } 01:28:59.455 ] 01:28:59.455 }' 01:28:59.455 05:23:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:28:59.714 05:23:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 01:28:59.714 05:23:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 01:28:59.714 05:23:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 01:28:59.714 05:23:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 01:28:59.714 05:23:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 01:28:59.714 05:23:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 01:28:59.714 05:23:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 01:28:59.714 05:23:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 01:28:59.714 05:23:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:59.714 05:23:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 01:28:59.714 [2024-12-09 05:23:51.155909] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 01:28:59.714 [2024-12-09 05:23:51.284538] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 01:28:59.714 [2024-12-09 05:23:51.284590] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 01:28:59.714 05:23:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:59.714 05:23:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 01:28:59.714 05:23:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 01:28:59.714 05:23:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 01:28:59.714 05:23:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:28:59.714 05:23:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 01:28:59.714 05:23:51 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 01:28:59.714 05:23:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:28:59.714 05:23:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:28:59.714 05:23:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:28:59.714 05:23:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:59.714 05:23:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 01:28:59.972 05:23:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:59.972 05:23:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:28:59.972 "name": "raid_bdev1", 01:28:59.972 "uuid": "7d23d1b3-0a67-4814-9f5e-15aa38aa08fb", 01:28:59.972 "strip_size_kb": 0, 01:28:59.972 "state": "online", 01:28:59.972 "raid_level": "raid1", 01:28:59.972 "superblock": false, 01:28:59.972 "num_base_bdevs": 4, 01:28:59.972 "num_base_bdevs_discovered": 3, 01:28:59.972 "num_base_bdevs_operational": 3, 01:28:59.972 "process": { 01:28:59.972 "type": "rebuild", 01:28:59.972 "target": "spare", 01:28:59.972 "progress": { 01:28:59.972 "blocks": 12288, 01:28:59.972 "percent": 18 01:28:59.972 } 01:28:59.972 }, 01:28:59.972 "base_bdevs_list": [ 01:28:59.972 { 01:28:59.972 "name": "spare", 01:28:59.972 "uuid": "be0e33c7-ae56-5286-ae71-39838495da0b", 01:28:59.972 "is_configured": true, 01:28:59.972 "data_offset": 0, 01:28:59.972 "data_size": 65536 01:28:59.972 }, 01:28:59.972 { 01:28:59.972 "name": null, 01:28:59.972 "uuid": "00000000-0000-0000-0000-000000000000", 01:28:59.972 "is_configured": false, 01:28:59.972 "data_offset": 0, 01:28:59.972 "data_size": 65536 01:28:59.972 }, 01:28:59.972 { 01:28:59.972 "name": "BaseBdev3", 01:28:59.972 "uuid": "02782383-8880-55e7-9b76-baf0ec001928", 01:28:59.972 
"is_configured": true, 01:28:59.972 "data_offset": 0, 01:28:59.972 "data_size": 65536 01:28:59.972 }, 01:28:59.972 { 01:28:59.972 "name": "BaseBdev4", 01:28:59.972 "uuid": "fa4f2bea-e482-5f27-9618-ab0479f627b3", 01:28:59.972 "is_configured": true, 01:28:59.972 "data_offset": 0, 01:28:59.972 "data_size": 65536 01:28:59.972 } 01:28:59.972 ] 01:28:59.972 }' 01:28:59.972 05:23:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:28:59.972 116.75 IOPS, 350.25 MiB/s [2024-12-09T05:23:51.589Z] 05:23:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 01:28:59.972 05:23:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:28:59.973 05:23:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 01:28:59.973 05:23:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=533 01:28:59.973 05:23:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 01:28:59.973 05:23:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 01:28:59.973 05:23:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:28:59.973 05:23:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 01:28:59.973 05:23:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 01:28:59.973 05:23:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:28:59.973 05:23:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:28:59.973 05:23:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:28:59.973 05:23:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:59.973 
05:23:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 01:28:59.973 05:23:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:59.973 05:23:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:28:59.973 "name": "raid_bdev1", 01:28:59.973 "uuid": "7d23d1b3-0a67-4814-9f5e-15aa38aa08fb", 01:28:59.973 "strip_size_kb": 0, 01:28:59.973 "state": "online", 01:28:59.973 "raid_level": "raid1", 01:28:59.973 "superblock": false, 01:28:59.973 "num_base_bdevs": 4, 01:28:59.973 "num_base_bdevs_discovered": 3, 01:28:59.973 "num_base_bdevs_operational": 3, 01:28:59.973 "process": { 01:28:59.973 "type": "rebuild", 01:28:59.973 "target": "spare", 01:28:59.973 "progress": { 01:28:59.973 "blocks": 14336, 01:28:59.973 "percent": 21 01:28:59.973 } 01:28:59.973 }, 01:28:59.973 "base_bdevs_list": [ 01:28:59.973 { 01:28:59.973 "name": "spare", 01:28:59.973 "uuid": "be0e33c7-ae56-5286-ae71-39838495da0b", 01:28:59.973 "is_configured": true, 01:28:59.973 "data_offset": 0, 01:28:59.973 "data_size": 65536 01:28:59.973 }, 01:28:59.973 { 01:28:59.973 "name": null, 01:28:59.973 "uuid": "00000000-0000-0000-0000-000000000000", 01:28:59.973 "is_configured": false, 01:28:59.973 "data_offset": 0, 01:28:59.973 "data_size": 65536 01:28:59.973 }, 01:28:59.973 { 01:28:59.973 "name": "BaseBdev3", 01:28:59.973 "uuid": "02782383-8880-55e7-9b76-baf0ec001928", 01:28:59.973 "is_configured": true, 01:28:59.973 "data_offset": 0, 01:28:59.973 "data_size": 65536 01:28:59.973 }, 01:28:59.973 { 01:28:59.973 "name": "BaseBdev4", 01:28:59.973 "uuid": "fa4f2bea-e482-5f27-9618-ab0479f627b3", 01:28:59.973 "is_configured": true, 01:28:59.973 "data_offset": 0, 01:28:59.973 "data_size": 65536 01:28:59.973 } 01:28:59.973 ] 01:28:59.973 }' 01:28:59.973 05:23:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:28:59.973 05:23:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- 
# [[ rebuild == \r\e\b\u\i\l\d ]] 01:28:59.973 05:23:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:29:00.242 05:23:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 01:29:00.242 05:23:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 01:29:00.821 [2024-12-09 05:23:52.143028] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 01:29:00.821 [2024-12-09 05:23:52.286991] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 01:29:01.080 107.20 IOPS, 321.60 MiB/s [2024-12-09T05:23:52.697Z] [2024-12-09 05:23:52.518664] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 01:29:01.080 05:23:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 01:29:01.080 05:23:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 01:29:01.080 05:23:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:29:01.080 05:23:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 01:29:01.080 05:23:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 01:29:01.080 05:23:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:29:01.080 05:23:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:29:01.080 05:23:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:01.080 05:23:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:29:01.080 05:23:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 
-- # set +x 01:29:01.080 05:23:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:01.338 05:23:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:29:01.338 "name": "raid_bdev1", 01:29:01.338 "uuid": "7d23d1b3-0a67-4814-9f5e-15aa38aa08fb", 01:29:01.338 "strip_size_kb": 0, 01:29:01.338 "state": "online", 01:29:01.338 "raid_level": "raid1", 01:29:01.338 "superblock": false, 01:29:01.338 "num_base_bdevs": 4, 01:29:01.338 "num_base_bdevs_discovered": 3, 01:29:01.338 "num_base_bdevs_operational": 3, 01:29:01.338 "process": { 01:29:01.338 "type": "rebuild", 01:29:01.338 "target": "spare", 01:29:01.338 "progress": { 01:29:01.338 "blocks": 32768, 01:29:01.338 "percent": 50 01:29:01.338 } 01:29:01.338 }, 01:29:01.338 "base_bdevs_list": [ 01:29:01.338 { 01:29:01.338 "name": "spare", 01:29:01.338 "uuid": "be0e33c7-ae56-5286-ae71-39838495da0b", 01:29:01.338 "is_configured": true, 01:29:01.338 "data_offset": 0, 01:29:01.338 "data_size": 65536 01:29:01.338 }, 01:29:01.338 { 01:29:01.338 "name": null, 01:29:01.338 "uuid": "00000000-0000-0000-0000-000000000000", 01:29:01.338 "is_configured": false, 01:29:01.338 "data_offset": 0, 01:29:01.338 "data_size": 65536 01:29:01.338 }, 01:29:01.338 { 01:29:01.338 "name": "BaseBdev3", 01:29:01.338 "uuid": "02782383-8880-55e7-9b76-baf0ec001928", 01:29:01.338 "is_configured": true, 01:29:01.338 "data_offset": 0, 01:29:01.338 "data_size": 65536 01:29:01.338 }, 01:29:01.338 { 01:29:01.338 "name": "BaseBdev4", 01:29:01.338 "uuid": "fa4f2bea-e482-5f27-9618-ab0479f627b3", 01:29:01.338 "is_configured": true, 01:29:01.338 "data_offset": 0, 01:29:01.338 "data_size": 65536 01:29:01.338 } 01:29:01.338 ] 01:29:01.338 }' 01:29:01.338 05:23:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:29:01.338 [2024-12-09 05:23:52.753083] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 
offset_end: 36864 01:29:01.338 05:23:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 01:29:01.338 05:23:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:29:01.338 05:23:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 01:29:01.338 05:23:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 01:29:01.597 [2024-12-09 05:23:53.099517] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 01:29:01.855 [2024-12-09 05:23:53.319238] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 01:29:01.855 [2024-12-09 05:23:53.320070] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 01:29:02.113 95.83 IOPS, 287.50 MiB/s [2024-12-09T05:23:53.730Z] [2024-12-09 05:23:53.684088] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 01:29:02.371 05:23:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 01:29:02.371 05:23:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 01:29:02.371 05:23:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:29:02.371 05:23:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 01:29:02.371 05:23:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 01:29:02.371 05:23:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:29:02.371 05:23:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:29:02.371 05:23:53 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@563 -- # xtrace_disable 01:29:02.371 05:23:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 01:29:02.371 05:23:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:29:02.371 05:23:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:02.371 05:23:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:29:02.371 "name": "raid_bdev1", 01:29:02.371 "uuid": "7d23d1b3-0a67-4814-9f5e-15aa38aa08fb", 01:29:02.371 "strip_size_kb": 0, 01:29:02.371 "state": "online", 01:29:02.371 "raid_level": "raid1", 01:29:02.371 "superblock": false, 01:29:02.371 "num_base_bdevs": 4, 01:29:02.371 "num_base_bdevs_discovered": 3, 01:29:02.371 "num_base_bdevs_operational": 3, 01:29:02.371 "process": { 01:29:02.371 "type": "rebuild", 01:29:02.371 "target": "spare", 01:29:02.371 "progress": { 01:29:02.371 "blocks": 47104, 01:29:02.371 "percent": 71 01:29:02.371 } 01:29:02.371 }, 01:29:02.371 "base_bdevs_list": [ 01:29:02.371 { 01:29:02.371 "name": "spare", 01:29:02.371 "uuid": "be0e33c7-ae56-5286-ae71-39838495da0b", 01:29:02.371 "is_configured": true, 01:29:02.371 "data_offset": 0, 01:29:02.371 "data_size": 65536 01:29:02.371 }, 01:29:02.371 { 01:29:02.371 "name": null, 01:29:02.371 "uuid": "00000000-0000-0000-0000-000000000000", 01:29:02.371 "is_configured": false, 01:29:02.371 "data_offset": 0, 01:29:02.371 "data_size": 65536 01:29:02.371 }, 01:29:02.371 { 01:29:02.371 "name": "BaseBdev3", 01:29:02.371 "uuid": "02782383-8880-55e7-9b76-baf0ec001928", 01:29:02.371 "is_configured": true, 01:29:02.371 "data_offset": 0, 01:29:02.371 "data_size": 65536 01:29:02.371 }, 01:29:02.371 { 01:29:02.371 "name": "BaseBdev4", 01:29:02.371 "uuid": "fa4f2bea-e482-5f27-9618-ab0479f627b3", 01:29:02.371 "is_configured": true, 01:29:02.371 "data_offset": 0, 01:29:02.371 "data_size": 65536 01:29:02.371 } 01:29:02.371 ] 01:29:02.371 }' 
01:29:02.371 05:23:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:29:02.371 05:23:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 01:29:02.371 05:23:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:29:02.371 05:23:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 01:29:02.371 05:23:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 01:29:03.194 87.71 IOPS, 263.14 MiB/s [2024-12-09T05:23:54.811Z] [2024-12-09 05:23:54.799904] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 01:29:03.452 [2024-12-09 05:23:54.907807] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 01:29:03.452 [2024-12-09 05:23:54.912829] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:29:03.452 05:23:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 01:29:03.452 05:23:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 01:29:03.452 05:23:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:29:03.452 05:23:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 01:29:03.452 05:23:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 01:29:03.452 05:23:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:29:03.452 05:23:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:29:03.452 05:23:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:29:03.452 05:23:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 
01:29:03.452 05:23:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 01:29:03.452 05:23:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:03.452 05:23:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:29:03.452 "name": "raid_bdev1", 01:29:03.452 "uuid": "7d23d1b3-0a67-4814-9f5e-15aa38aa08fb", 01:29:03.452 "strip_size_kb": 0, 01:29:03.452 "state": "online", 01:29:03.452 "raid_level": "raid1", 01:29:03.452 "superblock": false, 01:29:03.452 "num_base_bdevs": 4, 01:29:03.452 "num_base_bdevs_discovered": 3, 01:29:03.452 "num_base_bdevs_operational": 3, 01:29:03.452 "base_bdevs_list": [ 01:29:03.452 { 01:29:03.452 "name": "spare", 01:29:03.452 "uuid": "be0e33c7-ae56-5286-ae71-39838495da0b", 01:29:03.452 "is_configured": true, 01:29:03.452 "data_offset": 0, 01:29:03.452 "data_size": 65536 01:29:03.452 }, 01:29:03.452 { 01:29:03.452 "name": null, 01:29:03.452 "uuid": "00000000-0000-0000-0000-000000000000", 01:29:03.452 "is_configured": false, 01:29:03.452 "data_offset": 0, 01:29:03.452 "data_size": 65536 01:29:03.452 }, 01:29:03.452 { 01:29:03.452 "name": "BaseBdev3", 01:29:03.452 "uuid": "02782383-8880-55e7-9b76-baf0ec001928", 01:29:03.452 "is_configured": true, 01:29:03.452 "data_offset": 0, 01:29:03.452 "data_size": 65536 01:29:03.452 }, 01:29:03.452 { 01:29:03.452 "name": "BaseBdev4", 01:29:03.452 "uuid": "fa4f2bea-e482-5f27-9618-ab0479f627b3", 01:29:03.452 "is_configured": true, 01:29:03.452 "data_offset": 0, 01:29:03.452 "data_size": 65536 01:29:03.452 } 01:29:03.452 ] 01:29:03.452 }' 01:29:03.452 05:23:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:29:03.711 05:23:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 01:29:03.711 05:23:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:29:03.711 05:23:55 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 01:29:03.711 05:23:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 01:29:03.711 05:23:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 01:29:03.711 05:23:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:29:03.711 05:23:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 01:29:03.711 05:23:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 01:29:03.711 05:23:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:29:03.711 05:23:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:29:03.711 05:23:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:03.711 05:23:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 01:29:03.711 05:23:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:29:03.711 05:23:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:03.711 05:23:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:29:03.711 "name": "raid_bdev1", 01:29:03.711 "uuid": "7d23d1b3-0a67-4814-9f5e-15aa38aa08fb", 01:29:03.711 "strip_size_kb": 0, 01:29:03.711 "state": "online", 01:29:03.711 "raid_level": "raid1", 01:29:03.711 "superblock": false, 01:29:03.711 "num_base_bdevs": 4, 01:29:03.711 "num_base_bdevs_discovered": 3, 01:29:03.711 "num_base_bdevs_operational": 3, 01:29:03.711 "base_bdevs_list": [ 01:29:03.711 { 01:29:03.711 "name": "spare", 01:29:03.711 "uuid": "be0e33c7-ae56-5286-ae71-39838495da0b", 01:29:03.711 "is_configured": true, 01:29:03.711 "data_offset": 0, 01:29:03.711 "data_size": 65536 01:29:03.711 }, 
01:29:03.711 { 01:29:03.711 "name": null, 01:29:03.711 "uuid": "00000000-0000-0000-0000-000000000000", 01:29:03.711 "is_configured": false, 01:29:03.711 "data_offset": 0, 01:29:03.711 "data_size": 65536 01:29:03.711 }, 01:29:03.711 { 01:29:03.711 "name": "BaseBdev3", 01:29:03.711 "uuid": "02782383-8880-55e7-9b76-baf0ec001928", 01:29:03.711 "is_configured": true, 01:29:03.711 "data_offset": 0, 01:29:03.711 "data_size": 65536 01:29:03.711 }, 01:29:03.711 { 01:29:03.711 "name": "BaseBdev4", 01:29:03.711 "uuid": "fa4f2bea-e482-5f27-9618-ab0479f627b3", 01:29:03.711 "is_configured": true, 01:29:03.711 "data_offset": 0, 01:29:03.711 "data_size": 65536 01:29:03.711 } 01:29:03.711 ] 01:29:03.711 }' 01:29:03.711 05:23:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:29:03.711 05:23:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 01:29:03.711 05:23:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:29:03.711 05:23:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 01:29:03.711 05:23:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 01:29:03.711 05:23:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:29:03.711 05:23:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:29:03.711 05:23:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:29:03.711 05:23:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:29:03.711 05:23:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 01:29:03.711 05:23:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:29:03.711 05:23:55 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs
01:29:03.711 05:23:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
01:29:03.711 05:23:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp
01:29:03.711 05:23:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
01:29:03.711 05:23:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
01:29:03.711 05:23:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
01:29:03.711 05:23:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
01:29:03.970 05:23:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:29:03.970 05:23:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
01:29:03.970 "name": "raid_bdev1",
01:29:03.970 "uuid": "7d23d1b3-0a67-4814-9f5e-15aa38aa08fb",
01:29:03.970 "strip_size_kb": 0,
01:29:03.970 "state": "online",
01:29:03.970 "raid_level": "raid1",
01:29:03.970 "superblock": false,
01:29:03.970 "num_base_bdevs": 4,
01:29:03.970 "num_base_bdevs_discovered": 3,
01:29:03.970 "num_base_bdevs_operational": 3,
01:29:03.970 "base_bdevs_list": [
01:29:03.970 {
01:29:03.970 "name": "spare",
01:29:03.970 "uuid": "be0e33c7-ae56-5286-ae71-39838495da0b",
01:29:03.970 "is_configured": true,
01:29:03.970 "data_offset": 0,
01:29:03.970 "data_size": 65536
01:29:03.970 },
01:29:03.970 {
01:29:03.970 "name": null,
01:29:03.970 "uuid": "00000000-0000-0000-0000-000000000000",
01:29:03.970 "is_configured": false,
01:29:03.970 "data_offset": 0,
01:29:03.970 "data_size": 65536
01:29:03.970 },
01:29:03.970 {
01:29:03.970 "name": "BaseBdev3",
01:29:03.970 "uuid": "02782383-8880-55e7-9b76-baf0ec001928",
01:29:03.970 "is_configured": true,
01:29:03.970 "data_offset": 0,
01:29:03.970 "data_size": 65536
01:29:03.970 },
01:29:03.970 {
01:29:03.970 "name": "BaseBdev4",
01:29:03.970 "uuid": "fa4f2bea-e482-5f27-9618-ab0479f627b3",
01:29:03.970 "is_configured": true,
01:29:03.970 "data_offset": 0,
01:29:03.970 "data_size": 65536
01:29:03.970 }
01:29:03.970 ]
01:29:03.970 }'
01:29:03.970 05:23:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable
01:29:03.970 05:23:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
01:29:04.229 81.00 IOPS, 243.00 MiB/s [2024-12-09T05:23:55.846Z] 05:23:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1
01:29:04.229 05:23:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
01:29:04.229 05:23:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
01:29:04.229 [2024-12-09 05:23:55.807161] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
01:29:04.229 [2024-12-09 05:23:55.807200] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
01:29:04.488
01:29:04.488 Latency(us)
01:29:04.488 [2024-12-09T05:23:56.105Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
01:29:04.488 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728)
01:29:04.488 raid_bdev1 : 8.56 77.67 233.01 0.00 0.00 17479.72 269.96 115819.99
01:29:04.488 [2024-12-09T05:23:56.105Z] ===================================================================================================================
01:29:04.488 [2024-12-09T05:23:56.105Z] Total : 77.67 233.01 0.00 0.00 17479.72 269.96 115819.99
01:29:04.488 [2024-12-09 05:23:55.926480] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
01:29:04.488 {
01:29:04.488 "results": [
01:29:04.488 {
01:29:04.488 "job": "raid_bdev1",
01:29:04.488 "core_mask": "0x1",
01:29:04.488 "workload": "randrw",
01:29:04.488 "percentage": 50,
01:29:04.488 "status": "finished",
01:29:04.488 "queue_depth": 2,
01:29:04.488 "io_size": 3145728,
01:29:04.488 "runtime": 8.561892,
01:29:04.488 "iops": 77.66974869573221,
01:29:04.488 "mibps": 233.00924608719663,
01:29:04.488 "io_failed": 0,
01:29:04.488 "io_timeout": 0,
01:29:04.488 "avg_latency_us": 17479.720596035542,
01:29:04.488 "min_latency_us": 269.96363636363634,
01:29:04.488 "max_latency_us": 115819.98545454546
01:29:04.488 }
01:29:04.488 ],
01:29:04.488 "core_count": 1
01:29:04.488 }
01:29:04.488 [2024-12-09 05:23:55.926766] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
01:29:04.488 [2024-12-09 05:23:55.926936] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
01:29:04.488 [2024-12-09 05:23:55.926960] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline
01:29:04.488 05:23:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:29:04.488 05:23:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all
01:29:04.488 05:23:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length
01:29:04.488 05:23:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
01:29:04.488 05:23:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
01:29:04.488 05:23:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:29:04.488 05:23:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]]
01:29:04.488 05:23:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']'
01:29:04.488 05:23:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']'
01:29:04.488 05:23:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0
01:29:04.488 05:23:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock
01:29:04.488 05:23:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare')
01:29:04.488 05:23:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list
01:29:04.488 05:23:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0')
01:29:04.488 05:23:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list
01:29:04.488 05:23:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i
01:29:04.488 05:23:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
01:29:04.488 05:23:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
01:29:04.488 05:23:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0
01:29:04.745 /dev/nbd0
01:29:04.745 05:23:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
01:29:04.745 05:23:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
01:29:04.745 05:23:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
01:29:04.745 05:23:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i
01:29:04.745 05:23:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 ))
01:29:04.745 05:23:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 ))
01:29:04.745 05:23:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
01:29:04.745 05:23:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break
01:29:04.745 05:23:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 ))
01:29:04.745 05:23:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 ))
01:29:04.745 05:23:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
01:29:04.745 1+0 records in
01:29:04.745 1+0 records out
01:29:04.745 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000393433 s, 10.4 MB/s
01:29:04.745 05:23:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
01:29:04.745 05:23:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096
01:29:04.745 05:23:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
01:29:05.003 05:23:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
01:29:05.003 05:23:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0
01:29:05.003 05:23:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ ))
01:29:05.003 05:23:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
01:29:05.003 05:23:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}"
01:29:05.004 05:23:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']'
01:29:05.004 05:23:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@728 -- # continue
01:29:05.004 05:23:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}"
01:29:05.004 05:23:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']'
01:29:05.004 05:23:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1
01:29:05.004 05:23:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock
01:29:05.004 05:23:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3')
01:29:05.004 05:23:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list
01:29:05.004 05:23:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1')
01:29:05.004 05:23:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list
01:29:05.004 05:23:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i
01:29:05.004 05:23:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
01:29:05.004 05:23:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
01:29:05.004 05:23:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1
01:29:05.262 /dev/nbd1
01:29:05.262 05:23:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
01:29:05.262 05:23:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
01:29:05.262 05:23:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
01:29:05.262 05:23:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i
01:29:05.262 05:23:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 ))
01:29:05.262 05:23:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 ))
01:29:05.262 05:23:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
01:29:05.262 05:23:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break
01:29:05.262 05:23:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 ))
01:29:05.262 05:23:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 ))
01:29:05.262 05:23:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
01:29:05.262 1+0 records in
01:29:05.262 1+0 records out
01:29:05.262 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000834862 s, 4.9 MB/s
01:29:05.262 05:23:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
01:29:05.262 05:23:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096
01:29:05.262 05:23:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
01:29:05.262 05:23:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
01:29:05.262 05:23:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0
01:29:05.262 05:23:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ ))
01:29:05.262 05:23:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
01:29:05.262 05:23:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1
01:29:05.521 05:23:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1
01:29:05.521 05:23:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock
01:29:05.521 05:23:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1')
01:29:05.521 05:23:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list
01:29:05.521 05:23:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i
01:29:05.521 05:23:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
01:29:05.521 05:23:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1
01:29:05.779 05:23:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
01:29:05.780 05:23:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
01:29:05.780 05:23:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
01:29:05.780 05:23:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
01:29:05.780 05:23:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
01:29:05.780 05:23:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
01:29:05.780 05:23:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break
01:29:05.780 05:23:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0
01:29:05.780 05:23:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}"
01:29:05.780 05:23:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']'
01:29:05.780 05:23:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1
01:29:05.780 05:23:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock
01:29:05.780 05:23:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4')
01:29:05.780 05:23:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list
01:29:05.780 05:23:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1')
01:29:05.780 05:23:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list
01:29:05.780 05:23:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i
01:29:05.780 05:23:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
01:29:05.780 05:23:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
01:29:05.780 05:23:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1
01:29:06.039 /dev/nbd1
01:29:06.039 05:23:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
01:29:06.039 05:23:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
01:29:06.039 05:23:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
01:29:06.039 05:23:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i
01:29:06.039 05:23:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 ))
01:29:06.039 05:23:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 ))
01:29:06.039 05:23:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
01:29:06.039 05:23:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break
01:29:06.039 05:23:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 ))
01:29:06.039 05:23:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 ))
01:29:06.039 05:23:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
01:29:06.039 1+0 records in
01:29:06.039 1+0 records out
01:29:06.039 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00062051 s, 6.6 MB/s
01:29:06.039 05:23:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
01:29:06.039 05:23:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096
01:29:06.039 05:23:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
01:29:06.039 05:23:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
01:29:06.039 05:23:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0
01:29:06.039 05:23:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ ))
01:29:06.039 05:23:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
01:29:06.039 05:23:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1
01:29:06.297 05:23:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1
01:29:06.297 05:23:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock
01:29:06.297 05:23:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1')
01:29:06.297 05:23:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list
01:29:06.297 05:23:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i
01:29:06.298 05:23:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
01:29:06.298 05:23:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1
01:29:06.556 05:23:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
01:29:06.556 05:23:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
01:29:06.556 05:23:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
01:29:06.556 05:23:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
01:29:06.556 05:23:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
01:29:06.556 05:23:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
01:29:06.556 05:23:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break
01:29:06.556 05:23:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0
01:29:06.556 05:23:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0
01:29:06.556 05:23:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock
01:29:06.556 05:23:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
01:29:06.556 05:23:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list
01:29:06.556 05:23:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i
01:29:06.556 05:23:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
01:29:06.556 05:23:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0
01:29:06.815 05:23:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
01:29:06.815 05:23:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
01:29:06.815 05:23:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
01:29:06.815 05:23:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
01:29:06.815 05:23:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
01:29:06.815 05:23:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
01:29:06.815 05:23:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break
01:29:06.815 05:23:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0
01:29:06.815 05:23:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']'
01:29:06.815 05:23:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 78992
01:29:06.815 05:23:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 78992 ']'
01:29:06.815 05:23:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 78992
01:29:06.815 05:23:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # uname
01:29:06.815 05:23:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
01:29:06.815 05:23:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78992
01:29:06.815 killing process with pid 78992 Received shutdown signal, test time was about 10.973193 seconds
01:29:06.815
01:29:06.815 Latency(us)
01:29:06.815 [2024-12-09T05:23:58.432Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
01:29:06.815 [2024-12-09T05:23:58.432Z] ===================================================================================================================
01:29:06.815 [2024-12-09T05:23:58.432Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
01:29:06.815 05:23:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0
01:29:06.815 05:23:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
01:29:06.815 05:23:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78992'
01:29:06.815 05:23:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 78992
01:29:06.815 [2024-12-09 05:23:58.318425] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
01:29:06.815 05:23:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 78992
01:29:07.074 [2024-12-09 05:23:58.670920] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
01:29:08.451 05:23:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0
01:29:08.451
01:29:08.451 real 0m14.773s
01:29:08.451 user 0m19.566s
01:29:08.451 sys 0m1.900s
01:29:08.451 05:23:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable
01:29:08.451 ************************************
01:29:08.451 END TEST raid_rebuild_test_io
01:29:08.451 ************************************
01:29:08.451 05:23:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
01:29:08.451 05:23:59 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true
01:29:08.451 05:23:59 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']'
01:29:08.451 05:23:59 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
01:29:08.451 05:23:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x
01:29:08.451 ************************************
01:29:08.451 START TEST raid_rebuild_test_sb_io
01:29:08.451 ************************************
01:29:08.451 05:23:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true true true
01:29:08.451 05:23:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1
01:29:08.451 05:23:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4
01:29:08.451 05:23:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true
01:29:08.451 05:23:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true
01:29:08.451 05:23:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true
01:29:08.451 05:23:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 ))
01:29:08.451 05:23:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
01:29:08.451 05:23:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1
01:29:08.451 05:23:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ ))
01:29:08.451 05:23:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
01:29:08.451 05:23:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2
01:29:08.451 05:23:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ ))
01:29:08.451 05:23:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
01:29:08.451 05:23:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3
01:29:08.451 05:23:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ ))
01:29:08.451 05:23:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
01:29:08.451 05:23:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4
01:29:08.451 05:23:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ ))
01:29:08.451 05:23:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
01:29:08.451 05:23:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4')
01:29:08.451 05:23:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs
01:29:08.451 05:23:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1
01:29:08.451 05:23:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size
01:29:08.451 05:23:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg
01:29:08.451 05:23:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size
01:29:08.451 05:23:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset
01:29:08.451 05:23:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']'
01:29:08.451 05:23:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0
01:29:08.451 05:23:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']'
01:29:08.451 05:23:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s'
01:29:08.451 05:23:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=79419
01:29:08.451 05:23:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 79419
01:29:08.451 05:23:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 79419 ']'
01:29:08.451 05:23:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid
01:29:08.451 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
01:29:08.451 05:23:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
01:29:08.451 05:23:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100
01:29:08.451 05:23:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
01:29:08.451 05:23:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable
01:29:08.451 05:23:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
01:29:08.451 I/O size of 3145728 is greater than zero copy threshold (65536).
01:29:08.451 Zero copy mechanism will not be used.
01:29:08.451 [2024-12-09 05:23:59.952823] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization...
01:29:08.451 [2024-12-09 05:23:59.952992] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79419 ]
01:29:08.710 [2024-12-09 05:24:00.139026] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
01:29:08.710 [2024-12-09 05:24:00.258789] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
01:29:08.968 [2024-12-09 05:24:00.447581] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
01:29:08.969 [2024-12-09 05:24:00.447626] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
01:29:09.534 05:24:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 ))
01:29:09.534 05:24:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0
01:29:09.534 05:24:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
01:29:09.534 05:24:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc
01:29:09.534 05:24:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
01:29:09.534 05:24:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
01:29:09.534 BaseBdev1_malloc
01:29:09.534 05:24:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:29:09.534 05:24:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
01:29:09.534 05:24:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
01:29:09.534 05:24:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
01:29:09.534 [2024-12-09 05:24:01.018898] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc
01:29:09.534 [2024-12-09 05:24:01.019168] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
01:29:09.534 [2024-12-09 05:24:01.019236] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
01:29:09.534 [2024-12-09 05:24:01.019271] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
01:29:09.534 [2024-12-09 05:24:01.022163] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
01:29:09.534 [2024-12-09 05:24:01.022389] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
01:29:09.534 BaseBdev1
01:29:09.534 05:24:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:29:09.534 05:24:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
01:29:09.534 05:24:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc
01:29:09.534 05:24:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
01:29:09.534 05:24:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
01:29:09.534 BaseBdev2_malloc
01:29:09.534 05:24:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:29:09.534 05:24:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2
01:29:09.534 05:24:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
01:29:09.534 05:24:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
01:29:09.534 [2024-12-09 05:24:01.067689] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc
01:29:09.534 [2024-12-09 05:24:01.067770] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
01:29:09.534 [2024-12-09 05:24:01.067806] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
01:29:09.534 [2024-12-09 05:24:01.067826] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
01:29:09.534 [2024-12-09 05:24:01.070685] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
01:29:09.534 [2024-12-09 05:24:01.070739] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
01:29:09.534 BaseBdev2
01:29:09.534 05:24:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:29:09.534 05:24:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
01:29:09.534 05:24:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc
01:29:09.534 05:24:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
01:29:09.534 05:24:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
01:29:09.534 BaseBdev3_malloc
01:29:09.534 05:24:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:29:09.534 05:24:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3
01:29:09.534 05:24:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
01:29:09.534 05:24:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
01:29:09.534 [2024-12-09 05:24:01.129881] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc
01:29:09.534 [2024-12-09 05:24:01.129951] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
01:29:09.534 [2024-12-09 05:24:01.129985] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80
01:29:09.534 [2024-12-09 05:24:01.130010] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
01:29:09.534 [2024-12-09 05:24:01.132856] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
01:29:09.534 [2024-12-09 05:24:01.133067] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3
01:29:09.534 BaseBdev3
01:29:09.534 05:24:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:29:09.534 05:24:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
01:29:09.534 05:24:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc
01:29:09.534 05:24:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
01:29:09.534 05:24:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
01:29:09.792 BaseBdev4_malloc
01:29:09.792 05:24:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:29:09.792 05:24:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4
01:29:09.792 05:24:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
01:29:09.792 05:24:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
01:29:09.792 [2024-12-09 05:24:01.178634] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc
01:29:09.792 [2024-12-09 05:24:01.178727] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
01:29:09.792 [2024-12-09 05:24:01.178778] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680
01:29:09.792 [2024-12-09 05:24:01.178807] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
01:29:09.792 [2024-12-09 05:24:01.181721] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
01:29:09.792 [2024-12-09 05:24:01.181784] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4
01:29:09.792 BaseBdev4
01:29:09.792 05:24:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:29:09.792 05:24:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc
01:29:09.792 05:24:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
01:29:09.792 05:24:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
01:29:09.792 spare_malloc
01:29:09.792 05:24:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:29:09.792 05:24:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
01:29:09.792 05:24:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
01:29:09.792 05:24:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
01:29:09.792 spare_delay
01:29:09.792 05:24:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:29:09.792 05:24:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare
01:29:09.792 05:24:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
01:29:09.792 05:24:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
01:29:09.792 [2024-12-09 05:24:01.235063] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay
01:29:09.792 [2024-12-09 05:24:01.235140] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
01:29:09.792 [2024-12-09 05:24:01.235181] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880
01:29:09.792 [2024-12-09 05:24:01.235211] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
01:29:09.792 [2024-12-09 05:24:01.238145] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
01:29:09.792 [2024-12-09 05:24:01.238212] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
01:29:09.792 spare
01:29:09.792 05:24:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:29:09.792 05:24:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1
01:29:09.792 05:24:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
01:29:09.792 05:24:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
01:29:09.792 [2024-12-09 05:24:01.243183] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
01:29:09.792 [2024-12-09 05:24:01.245741] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
01:29:09.792 [2024-12-09 05:24:01.245855] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
01:29:09.792 [2024-12-09 05:24:01.245951] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
01:29:09.792 [2024-12-09 05:24:01.246210] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
01:29:09.792 [2024-12-09 05:24:01.246234] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
01:29:09.792 [2024-12-09 05:24:01.246610] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0
01:29:09.792 [2024-12-09 05:24:01.246852] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
01:29:09.792 [2024-12-09 05:24:01.246877]
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 01:29:09.792 [2024-12-09 05:24:01.247138] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:29:09.792 05:24:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:09.792 05:24:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 01:29:09.792 05:24:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:29:09.792 05:24:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:29:09.793 05:24:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:29:09.793 05:24:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:29:09.793 05:24:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 01:29:09.793 05:24:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:29:09.793 05:24:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:29:09.793 05:24:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:29:09.793 05:24:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 01:29:09.793 05:24:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:29:09.793 05:24:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:29:09.793 05:24:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:09.793 05:24:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 01:29:09.793 05:24:01 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:09.793 05:24:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:29:09.793 "name": "raid_bdev1", 01:29:09.793 "uuid": "232810cc-d837-47e8-b583-b83988496e58", 01:29:09.793 "strip_size_kb": 0, 01:29:09.793 "state": "online", 01:29:09.793 "raid_level": "raid1", 01:29:09.793 "superblock": true, 01:29:09.793 "num_base_bdevs": 4, 01:29:09.793 "num_base_bdevs_discovered": 4, 01:29:09.793 "num_base_bdevs_operational": 4, 01:29:09.793 "base_bdevs_list": [ 01:29:09.793 { 01:29:09.793 "name": "BaseBdev1", 01:29:09.793 "uuid": "a6143e36-472d-542d-81fe-7a842e30d96f", 01:29:09.793 "is_configured": true, 01:29:09.793 "data_offset": 2048, 01:29:09.793 "data_size": 63488 01:29:09.793 }, 01:29:09.793 { 01:29:09.793 "name": "BaseBdev2", 01:29:09.793 "uuid": "5050816f-8791-5362-bddf-8a708b2d7989", 01:29:09.793 "is_configured": true, 01:29:09.793 "data_offset": 2048, 01:29:09.793 "data_size": 63488 01:29:09.793 }, 01:29:09.793 { 01:29:09.793 "name": "BaseBdev3", 01:29:09.793 "uuid": "1bfcbd86-832f-5504-9b27-3f92cc392433", 01:29:09.793 "is_configured": true, 01:29:09.793 "data_offset": 2048, 01:29:09.793 "data_size": 63488 01:29:09.793 }, 01:29:09.793 { 01:29:09.793 "name": "BaseBdev4", 01:29:09.793 "uuid": "4f9072ae-88f1-5a77-8000-8da3e2f23d88", 01:29:09.793 "is_configured": true, 01:29:09.793 "data_offset": 2048, 01:29:09.793 "data_size": 63488 01:29:09.793 } 01:29:09.793 ] 01:29:09.793 }' 01:29:09.793 05:24:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:29:09.793 05:24:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 01:29:10.359 05:24:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 01:29:10.359 05:24:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 01:29:10.359 05:24:01 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@563 -- # xtrace_disable 01:29:10.359 05:24:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 01:29:10.359 [2024-12-09 05:24:01.743836] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 01:29:10.359 05:24:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:10.359 05:24:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 01:29:10.359 05:24:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 01:29:10.359 05:24:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 01:29:10.359 05:24:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:10.359 05:24:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 01:29:10.359 05:24:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:10.359 05:24:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 01:29:10.359 05:24:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 01:29:10.359 05:24:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 01:29:10.359 05:24:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 01:29:10.359 05:24:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:10.359 05:24:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 01:29:10.359 [2024-12-09 05:24:01.851392] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 01:29:10.359 05:24:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:10.359 05:24:01 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 01:29:10.359 05:24:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:29:10.359 05:24:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:29:10.359 05:24:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:29:10.359 05:24:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:29:10.359 05:24:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 01:29:10.359 05:24:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:29:10.359 05:24:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:29:10.359 05:24:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:29:10.359 05:24:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 01:29:10.359 05:24:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:29:10.359 05:24:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:29:10.359 05:24:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:10.359 05:24:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 01:29:10.359 05:24:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:10.359 05:24:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:29:10.359 "name": "raid_bdev1", 01:29:10.359 "uuid": "232810cc-d837-47e8-b583-b83988496e58", 01:29:10.359 "strip_size_kb": 0, 01:29:10.359 "state": "online", 01:29:10.359 "raid_level": "raid1", 01:29:10.359 
"superblock": true, 01:29:10.359 "num_base_bdevs": 4, 01:29:10.359 "num_base_bdevs_discovered": 3, 01:29:10.359 "num_base_bdevs_operational": 3, 01:29:10.359 "base_bdevs_list": [ 01:29:10.359 { 01:29:10.359 "name": null, 01:29:10.359 "uuid": "00000000-0000-0000-0000-000000000000", 01:29:10.359 "is_configured": false, 01:29:10.359 "data_offset": 0, 01:29:10.359 "data_size": 63488 01:29:10.359 }, 01:29:10.359 { 01:29:10.359 "name": "BaseBdev2", 01:29:10.359 "uuid": "5050816f-8791-5362-bddf-8a708b2d7989", 01:29:10.359 "is_configured": true, 01:29:10.359 "data_offset": 2048, 01:29:10.359 "data_size": 63488 01:29:10.359 }, 01:29:10.359 { 01:29:10.359 "name": "BaseBdev3", 01:29:10.359 "uuid": "1bfcbd86-832f-5504-9b27-3f92cc392433", 01:29:10.359 "is_configured": true, 01:29:10.359 "data_offset": 2048, 01:29:10.359 "data_size": 63488 01:29:10.359 }, 01:29:10.359 { 01:29:10.359 "name": "BaseBdev4", 01:29:10.359 "uuid": "4f9072ae-88f1-5a77-8000-8da3e2f23d88", 01:29:10.359 "is_configured": true, 01:29:10.359 "data_offset": 2048, 01:29:10.359 "data_size": 63488 01:29:10.359 } 01:29:10.359 ] 01:29:10.359 }' 01:29:10.359 05:24:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:29:10.359 05:24:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 01:29:10.617 [2024-12-09 05:24:01.990305] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 01:29:10.617 I/O size of 3145728 is greater than zero copy threshold (65536). 01:29:10.617 Zero copy mechanism will not be used. 01:29:10.617 Running I/O for 60 seconds... 
01:29:10.875 05:24:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 01:29:10.875 05:24:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:10.875 05:24:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 01:29:10.875 [2024-12-09 05:24:02.433763] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 01:29:10.875 05:24:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:10.875 05:24:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 01:29:10.875 [2024-12-09 05:24:02.484947] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 01:29:10.875 [2024-12-09 05:24:02.487802] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 01:29:11.133 [2024-12-09 05:24:02.623658] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 01:29:11.133 [2024-12-09 05:24:02.625759] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 01:29:11.391 [2024-12-09 05:24:02.858706] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 01:29:11.391 [2024-12-09 05:24:02.859725] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 01:29:11.649 112.00 IOPS, 336.00 MiB/s [2024-12-09T05:24:03.266Z] [2024-12-09 05:24:03.237069] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 01:29:11.906 [2024-12-09 05:24:03.459460] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 01:29:11.906 [2024-12-09 05:24:03.460060] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 01:29:11.906 05:24:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 01:29:11.906 05:24:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:29:11.906 05:24:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 01:29:11.906 05:24:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 01:29:11.906 05:24:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:29:11.906 05:24:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:29:11.906 05:24:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:29:11.906 05:24:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:11.906 05:24:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 01:29:12.165 05:24:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:12.165 05:24:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:29:12.165 "name": "raid_bdev1", 01:29:12.165 "uuid": "232810cc-d837-47e8-b583-b83988496e58", 01:29:12.165 "strip_size_kb": 0, 01:29:12.165 "state": "online", 01:29:12.165 "raid_level": "raid1", 01:29:12.165 "superblock": true, 01:29:12.165 "num_base_bdevs": 4, 01:29:12.165 "num_base_bdevs_discovered": 4, 01:29:12.165 "num_base_bdevs_operational": 4, 01:29:12.165 "process": { 01:29:12.165 "type": "rebuild", 01:29:12.165 "target": "spare", 01:29:12.165 "progress": { 01:29:12.165 "blocks": 10240, 01:29:12.166 "percent": 16 01:29:12.166 } 01:29:12.166 }, 01:29:12.166 "base_bdevs_list": [ 01:29:12.166 { 01:29:12.166 "name": "spare", 
01:29:12.166 "uuid": "36f64a95-20de-5d7f-b7a8-b3aa899e382d", 01:29:12.166 "is_configured": true, 01:29:12.166 "data_offset": 2048, 01:29:12.166 "data_size": 63488 01:29:12.166 }, 01:29:12.166 { 01:29:12.166 "name": "BaseBdev2", 01:29:12.166 "uuid": "5050816f-8791-5362-bddf-8a708b2d7989", 01:29:12.166 "is_configured": true, 01:29:12.166 "data_offset": 2048, 01:29:12.166 "data_size": 63488 01:29:12.166 }, 01:29:12.166 { 01:29:12.166 "name": "BaseBdev3", 01:29:12.166 "uuid": "1bfcbd86-832f-5504-9b27-3f92cc392433", 01:29:12.166 "is_configured": true, 01:29:12.166 "data_offset": 2048, 01:29:12.166 "data_size": 63488 01:29:12.166 }, 01:29:12.166 { 01:29:12.166 "name": "BaseBdev4", 01:29:12.166 "uuid": "4f9072ae-88f1-5a77-8000-8da3e2f23d88", 01:29:12.166 "is_configured": true, 01:29:12.166 "data_offset": 2048, 01:29:12.166 "data_size": 63488 01:29:12.166 } 01:29:12.166 ] 01:29:12.166 }' 01:29:12.166 05:24:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:29:12.166 05:24:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 01:29:12.166 05:24:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:29:12.166 05:24:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 01:29:12.166 05:24:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 01:29:12.166 05:24:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:12.166 05:24:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 01:29:12.166 [2024-12-09 05:24:03.655880] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 01:29:12.424 [2024-12-09 05:24:03.804622] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 01:29:12.424 [2024-12-09 05:24:03.820535] 
bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:29:12.424 [2024-12-09 05:24:03.820844] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 01:29:12.424 [2024-12-09 05:24:03.820910] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 01:29:12.424 [2024-12-09 05:24:03.856465] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 01:29:12.424 05:24:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:12.424 05:24:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 01:29:12.424 05:24:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:29:12.424 05:24:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:29:12.424 05:24:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:29:12.424 05:24:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:29:12.424 05:24:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 01:29:12.424 05:24:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:29:12.424 05:24:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:29:12.424 05:24:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:29:12.424 05:24:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 01:29:12.424 05:24:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:29:12.424 05:24:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:12.424 05:24:03 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 01:29:12.424 05:24:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:29:12.424 05:24:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:12.424 05:24:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:29:12.424 "name": "raid_bdev1", 01:29:12.424 "uuid": "232810cc-d837-47e8-b583-b83988496e58", 01:29:12.424 "strip_size_kb": 0, 01:29:12.424 "state": "online", 01:29:12.424 "raid_level": "raid1", 01:29:12.424 "superblock": true, 01:29:12.424 "num_base_bdevs": 4, 01:29:12.424 "num_base_bdevs_discovered": 3, 01:29:12.424 "num_base_bdevs_operational": 3, 01:29:12.424 "base_bdevs_list": [ 01:29:12.424 { 01:29:12.424 "name": null, 01:29:12.424 "uuid": "00000000-0000-0000-0000-000000000000", 01:29:12.424 "is_configured": false, 01:29:12.424 "data_offset": 0, 01:29:12.424 "data_size": 63488 01:29:12.424 }, 01:29:12.424 { 01:29:12.424 "name": "BaseBdev2", 01:29:12.424 "uuid": "5050816f-8791-5362-bddf-8a708b2d7989", 01:29:12.424 "is_configured": true, 01:29:12.424 "data_offset": 2048, 01:29:12.424 "data_size": 63488 01:29:12.424 }, 01:29:12.424 { 01:29:12.424 "name": "BaseBdev3", 01:29:12.424 "uuid": "1bfcbd86-832f-5504-9b27-3f92cc392433", 01:29:12.424 "is_configured": true, 01:29:12.424 "data_offset": 2048, 01:29:12.424 "data_size": 63488 01:29:12.424 }, 01:29:12.424 { 01:29:12.424 "name": "BaseBdev4", 01:29:12.424 "uuid": "4f9072ae-88f1-5a77-8000-8da3e2f23d88", 01:29:12.424 "is_configured": true, 01:29:12.424 "data_offset": 2048, 01:29:12.424 "data_size": 63488 01:29:12.424 } 01:29:12.424 ] 01:29:12.424 }' 01:29:12.424 05:24:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:29:12.424 05:24:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 01:29:12.992 95.50 IOPS, 286.50 MiB/s [2024-12-09T05:24:04.609Z] 05:24:04 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 01:29:12.992 05:24:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:29:12.992 05:24:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 01:29:12.992 05:24:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 01:29:12.992 05:24:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:29:12.992 05:24:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:29:12.992 05:24:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:29:12.992 05:24:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:12.992 05:24:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 01:29:12.992 05:24:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:12.992 05:24:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:29:12.992 "name": "raid_bdev1", 01:29:12.992 "uuid": "232810cc-d837-47e8-b583-b83988496e58", 01:29:12.992 "strip_size_kb": 0, 01:29:12.992 "state": "online", 01:29:12.992 "raid_level": "raid1", 01:29:12.992 "superblock": true, 01:29:12.992 "num_base_bdevs": 4, 01:29:12.992 "num_base_bdevs_discovered": 3, 01:29:12.992 "num_base_bdevs_operational": 3, 01:29:12.992 "base_bdevs_list": [ 01:29:12.992 { 01:29:12.992 "name": null, 01:29:12.992 "uuid": "00000000-0000-0000-0000-000000000000", 01:29:12.992 "is_configured": false, 01:29:12.992 "data_offset": 0, 01:29:12.992 "data_size": 63488 01:29:12.992 }, 01:29:12.992 { 01:29:12.992 "name": "BaseBdev2", 01:29:12.992 "uuid": "5050816f-8791-5362-bddf-8a708b2d7989", 01:29:12.992 "is_configured": true, 01:29:12.992 "data_offset": 
2048, 01:29:12.992 "data_size": 63488 01:29:12.992 }, 01:29:12.992 { 01:29:12.992 "name": "BaseBdev3", 01:29:12.992 "uuid": "1bfcbd86-832f-5504-9b27-3f92cc392433", 01:29:12.992 "is_configured": true, 01:29:12.992 "data_offset": 2048, 01:29:12.992 "data_size": 63488 01:29:12.992 }, 01:29:12.992 { 01:29:12.992 "name": "BaseBdev4", 01:29:12.992 "uuid": "4f9072ae-88f1-5a77-8000-8da3e2f23d88", 01:29:12.992 "is_configured": true, 01:29:12.992 "data_offset": 2048, 01:29:12.992 "data_size": 63488 01:29:12.992 } 01:29:12.992 ] 01:29:12.992 }' 01:29:12.992 05:24:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:29:12.992 05:24:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 01:29:12.992 05:24:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:29:12.992 05:24:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 01:29:12.992 05:24:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 01:29:12.992 05:24:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:12.992 05:24:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 01:29:13.249 [2024-12-09 05:24:04.609135] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 01:29:13.249 05:24:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:13.249 05:24:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 01:29:13.249 [2024-12-09 05:24:04.663223] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 01:29:13.249 [2024-12-09 05:24:04.665894] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 01:29:13.249 [2024-12-09 05:24:04.776073] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 01:29:13.249 [2024-12-09 05:24:04.776809] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 01:29:13.506 [2024-12-09 05:24:04.946620] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 01:29:13.506 [2024-12-09 05:24:04.947765] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 01:29:14.072 111.00 IOPS, 333.00 MiB/s [2024-12-09T05:24:05.689Z] 05:24:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 01:29:14.072 05:24:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:29:14.072 05:24:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 01:29:14.072 05:24:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 01:29:14.072 05:24:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:29:14.072 05:24:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:29:14.072 05:24:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:29:14.072 05:24:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:14.072 05:24:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 01:29:14.072 05:24:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:14.330 05:24:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:29:14.330 "name": "raid_bdev1", 01:29:14.330 "uuid": "232810cc-d837-47e8-b583-b83988496e58", 01:29:14.330 
"strip_size_kb": 0, 01:29:14.330 "state": "online", 01:29:14.330 "raid_level": "raid1", 01:29:14.330 "superblock": true, 01:29:14.330 "num_base_bdevs": 4, 01:29:14.330 "num_base_bdevs_discovered": 4, 01:29:14.330 "num_base_bdevs_operational": 4, 01:29:14.330 "process": { 01:29:14.330 "type": "rebuild", 01:29:14.330 "target": "spare", 01:29:14.330 "progress": { 01:29:14.330 "blocks": 12288, 01:29:14.330 "percent": 19 01:29:14.330 } 01:29:14.330 }, 01:29:14.330 "base_bdevs_list": [ 01:29:14.330 { 01:29:14.330 "name": "spare", 01:29:14.330 "uuid": "36f64a95-20de-5d7f-b7a8-b3aa899e382d", 01:29:14.330 "is_configured": true, 01:29:14.330 "data_offset": 2048, 01:29:14.330 "data_size": 63488 01:29:14.330 }, 01:29:14.330 { 01:29:14.330 "name": "BaseBdev2", 01:29:14.330 "uuid": "5050816f-8791-5362-bddf-8a708b2d7989", 01:29:14.330 "is_configured": true, 01:29:14.330 "data_offset": 2048, 01:29:14.330 "data_size": 63488 01:29:14.330 }, 01:29:14.330 { 01:29:14.330 "name": "BaseBdev3", 01:29:14.330 "uuid": "1bfcbd86-832f-5504-9b27-3f92cc392433", 01:29:14.330 "is_configured": true, 01:29:14.330 "data_offset": 2048, 01:29:14.330 "data_size": 63488 01:29:14.330 }, 01:29:14.330 { 01:29:14.330 "name": "BaseBdev4", 01:29:14.330 "uuid": "4f9072ae-88f1-5a77-8000-8da3e2f23d88", 01:29:14.330 "is_configured": true, 01:29:14.330 "data_offset": 2048, 01:29:14.330 "data_size": 63488 01:29:14.330 } 01:29:14.330 ] 01:29:14.330 }' 01:29:14.330 05:24:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:29:14.330 05:24:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 01:29:14.330 05:24:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:29:14.330 05:24:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 01:29:14.330 05:24:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 
01:29:14.330 05:24:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 01:29:14.330 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 01:29:14.330 05:24:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 01:29:14.330 05:24:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 01:29:14.330 05:24:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 01:29:14.330 05:24:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 01:29:14.330 05:24:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:14.330 05:24:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 01:29:14.330 [2024-12-09 05:24:05.803614] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 01:29:14.330 [2024-12-09 05:24:05.891713] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 01:29:14.588 99.25 IOPS, 297.75 MiB/s [2024-12-09T05:24:06.206Z] [2024-12-09 05:24:06.095101] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 01:29:14.589 [2024-12-09 05:24:06.095178] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 01:29:14.589 [2024-12-09 05:24:06.105707] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 01:29:14.589 05:24:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:14.589 05:24:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 01:29:14.589 05:24:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 01:29:14.589 05:24:06 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 01:29:14.589 05:24:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:29:14.589 05:24:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 01:29:14.589 05:24:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 01:29:14.589 05:24:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:29:14.589 05:24:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:29:14.589 05:24:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:29:14.589 05:24:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:14.589 05:24:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 01:29:14.589 05:24:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:14.589 05:24:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:29:14.589 "name": "raid_bdev1", 01:29:14.589 "uuid": "232810cc-d837-47e8-b583-b83988496e58", 01:29:14.589 "strip_size_kb": 0, 01:29:14.589 "state": "online", 01:29:14.589 "raid_level": "raid1", 01:29:14.589 "superblock": true, 01:29:14.589 "num_base_bdevs": 4, 01:29:14.589 "num_base_bdevs_discovered": 3, 01:29:14.589 "num_base_bdevs_operational": 3, 01:29:14.589 "process": { 01:29:14.589 "type": "rebuild", 01:29:14.589 "target": "spare", 01:29:14.589 "progress": { 01:29:14.589 "blocks": 16384, 01:29:14.589 "percent": 25 01:29:14.589 } 01:29:14.589 }, 01:29:14.589 "base_bdevs_list": [ 01:29:14.589 { 01:29:14.589 "name": "spare", 01:29:14.589 "uuid": "36f64a95-20de-5d7f-b7a8-b3aa899e382d", 01:29:14.589 "is_configured": true, 01:29:14.589 "data_offset": 2048, 
01:29:14.589 "data_size": 63488 01:29:14.589 }, 01:29:14.589 { 01:29:14.589 "name": null, 01:29:14.589 "uuid": "00000000-0000-0000-0000-000000000000", 01:29:14.589 "is_configured": false, 01:29:14.589 "data_offset": 0, 01:29:14.589 "data_size": 63488 01:29:14.589 }, 01:29:14.589 { 01:29:14.589 "name": "BaseBdev3", 01:29:14.589 "uuid": "1bfcbd86-832f-5504-9b27-3f92cc392433", 01:29:14.589 "is_configured": true, 01:29:14.589 "data_offset": 2048, 01:29:14.589 "data_size": 63488 01:29:14.589 }, 01:29:14.589 { 01:29:14.589 "name": "BaseBdev4", 01:29:14.589 "uuid": "4f9072ae-88f1-5a77-8000-8da3e2f23d88", 01:29:14.589 "is_configured": true, 01:29:14.589 "data_offset": 2048, 01:29:14.589 "data_size": 63488 01:29:14.589 } 01:29:14.589 ] 01:29:14.589 }' 01:29:14.589 05:24:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:29:14.847 05:24:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 01:29:14.847 05:24:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:29:14.847 05:24:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 01:29:14.847 05:24:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=548 01:29:14.847 05:24:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 01:29:14.847 05:24:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 01:29:14.847 05:24:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:29:14.847 05:24:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 01:29:14.847 05:24:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 01:29:14.847 05:24:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local 
raid_bdev_info 01:29:14.847 05:24:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:29:14.847 05:24:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:29:14.847 05:24:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:14.847 05:24:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 01:29:14.847 05:24:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:14.847 05:24:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:29:14.847 "name": "raid_bdev1", 01:29:14.847 "uuid": "232810cc-d837-47e8-b583-b83988496e58", 01:29:14.847 "strip_size_kb": 0, 01:29:14.847 "state": "online", 01:29:14.847 "raid_level": "raid1", 01:29:14.847 "superblock": true, 01:29:14.847 "num_base_bdevs": 4, 01:29:14.847 "num_base_bdevs_discovered": 3, 01:29:14.847 "num_base_bdevs_operational": 3, 01:29:14.847 "process": { 01:29:14.847 "type": "rebuild", 01:29:14.847 "target": "spare", 01:29:14.847 "progress": { 01:29:14.847 "blocks": 18432, 01:29:14.847 "percent": 29 01:29:14.847 } 01:29:14.847 }, 01:29:14.847 "base_bdevs_list": [ 01:29:14.847 { 01:29:14.847 "name": "spare", 01:29:14.847 "uuid": "36f64a95-20de-5d7f-b7a8-b3aa899e382d", 01:29:14.847 "is_configured": true, 01:29:14.847 "data_offset": 2048, 01:29:14.847 "data_size": 63488 01:29:14.847 }, 01:29:14.847 { 01:29:14.847 "name": null, 01:29:14.847 "uuid": "00000000-0000-0000-0000-000000000000", 01:29:14.847 "is_configured": false, 01:29:14.847 "data_offset": 0, 01:29:14.847 "data_size": 63488 01:29:14.847 }, 01:29:14.847 { 01:29:14.847 "name": "BaseBdev3", 01:29:14.847 "uuid": "1bfcbd86-832f-5504-9b27-3f92cc392433", 01:29:14.847 "is_configured": true, 01:29:14.847 "data_offset": 2048, 01:29:14.847 "data_size": 63488 01:29:14.847 }, 01:29:14.847 { 01:29:14.847 "name": "BaseBdev4", 
01:29:14.847 "uuid": "4f9072ae-88f1-5a77-8000-8da3e2f23d88", 01:29:14.847 "is_configured": true, 01:29:14.847 "data_offset": 2048, 01:29:14.847 "data_size": 63488 01:29:14.847 } 01:29:14.848 ] 01:29:14.848 }' 01:29:14.848 05:24:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:29:14.848 [2024-12-09 05:24:06.349646] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 01:29:14.848 [2024-12-09 05:24:06.351252] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 01:29:14.848 05:24:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 01:29:14.848 05:24:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:29:14.848 05:24:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 01:29:14.848 05:24:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 01:29:15.106 [2024-12-09 05:24:06.567398] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 01:29:15.680 90.60 IOPS, 271.80 MiB/s [2024-12-09T05:24:07.297Z] [2024-12-09 05:24:07.045893] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 01:29:15.939 [2024-12-09 05:24:07.376533] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 01:29:15.939 [2024-12-09 05:24:07.378022] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 01:29:15.939 05:24:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 01:29:15.939 05:24:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 
-- # verify_raid_bdev_process raid_bdev1 rebuild spare 01:29:15.939 05:24:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:29:15.939 05:24:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 01:29:15.939 05:24:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 01:29:15.939 05:24:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:29:15.939 05:24:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:29:15.939 05:24:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:29:15.939 05:24:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:15.939 05:24:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 01:29:15.939 05:24:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:15.939 05:24:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:29:15.939 "name": "raid_bdev1", 01:29:15.939 "uuid": "232810cc-d837-47e8-b583-b83988496e58", 01:29:15.939 "strip_size_kb": 0, 01:29:15.939 "state": "online", 01:29:15.939 "raid_level": "raid1", 01:29:15.939 "superblock": true, 01:29:15.939 "num_base_bdevs": 4, 01:29:15.939 "num_base_bdevs_discovered": 3, 01:29:15.939 "num_base_bdevs_operational": 3, 01:29:15.939 "process": { 01:29:15.939 "type": "rebuild", 01:29:15.939 "target": "spare", 01:29:15.939 "progress": { 01:29:15.939 "blocks": 32768, 01:29:15.939 "percent": 51 01:29:15.939 } 01:29:15.939 }, 01:29:15.939 "base_bdevs_list": [ 01:29:15.939 { 01:29:15.939 "name": "spare", 01:29:15.939 "uuid": "36f64a95-20de-5d7f-b7a8-b3aa899e382d", 01:29:15.939 "is_configured": true, 01:29:15.939 "data_offset": 2048, 01:29:15.939 "data_size": 63488 01:29:15.939 }, 01:29:15.939 { 
01:29:15.939 "name": null, 01:29:15.939 "uuid": "00000000-0000-0000-0000-000000000000", 01:29:15.939 "is_configured": false, 01:29:15.939 "data_offset": 0, 01:29:15.939 "data_size": 63488 01:29:15.939 }, 01:29:15.939 { 01:29:15.939 "name": "BaseBdev3", 01:29:15.939 "uuid": "1bfcbd86-832f-5504-9b27-3f92cc392433", 01:29:15.939 "is_configured": true, 01:29:15.939 "data_offset": 2048, 01:29:15.939 "data_size": 63488 01:29:15.939 }, 01:29:15.939 { 01:29:15.939 "name": "BaseBdev4", 01:29:15.939 "uuid": "4f9072ae-88f1-5a77-8000-8da3e2f23d88", 01:29:15.939 "is_configured": true, 01:29:15.939 "data_offset": 2048, 01:29:15.939 "data_size": 63488 01:29:15.939 } 01:29:15.939 ] 01:29:15.939 }' 01:29:15.939 05:24:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:29:16.198 05:24:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 01:29:16.198 05:24:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:29:16.198 05:24:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 01:29:16.198 05:24:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 01:29:16.716 83.83 IOPS, 251.50 MiB/s [2024-12-09T05:24:08.333Z] [2024-12-09 05:24:08.203142] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 01:29:17.283 05:24:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 01:29:17.283 05:24:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 01:29:17.283 05:24:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:29:17.283 05:24:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 01:29:17.283 05:24:08 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 01:29:17.283 05:24:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:29:17.283 05:24:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:29:17.283 05:24:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:29:17.283 05:24:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:17.283 05:24:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 01:29:17.283 05:24:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:17.283 05:24:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:29:17.283 "name": "raid_bdev1", 01:29:17.283 "uuid": "232810cc-d837-47e8-b583-b83988496e58", 01:29:17.283 "strip_size_kb": 0, 01:29:17.283 "state": "online", 01:29:17.283 "raid_level": "raid1", 01:29:17.283 "superblock": true, 01:29:17.283 "num_base_bdevs": 4, 01:29:17.283 "num_base_bdevs_discovered": 3, 01:29:17.283 "num_base_bdevs_operational": 3, 01:29:17.283 "process": { 01:29:17.283 "type": "rebuild", 01:29:17.283 "target": "spare", 01:29:17.283 "progress": { 01:29:17.283 "blocks": 49152, 01:29:17.283 "percent": 77 01:29:17.283 } 01:29:17.283 }, 01:29:17.283 "base_bdevs_list": [ 01:29:17.283 { 01:29:17.283 "name": "spare", 01:29:17.283 "uuid": "36f64a95-20de-5d7f-b7a8-b3aa899e382d", 01:29:17.283 "is_configured": true, 01:29:17.283 "data_offset": 2048, 01:29:17.283 "data_size": 63488 01:29:17.283 }, 01:29:17.283 { 01:29:17.283 "name": null, 01:29:17.283 "uuid": "00000000-0000-0000-0000-000000000000", 01:29:17.283 "is_configured": false, 01:29:17.283 "data_offset": 0, 01:29:17.283 "data_size": 63488 01:29:17.283 }, 01:29:17.283 { 01:29:17.283 "name": "BaseBdev3", 01:29:17.283 "uuid": 
"1bfcbd86-832f-5504-9b27-3f92cc392433", 01:29:17.283 "is_configured": true, 01:29:17.283 "data_offset": 2048, 01:29:17.283 "data_size": 63488 01:29:17.283 }, 01:29:17.283 { 01:29:17.283 "name": "BaseBdev4", 01:29:17.283 "uuid": "4f9072ae-88f1-5a77-8000-8da3e2f23d88", 01:29:17.283 "is_configured": true, 01:29:17.283 "data_offset": 2048, 01:29:17.283 "data_size": 63488 01:29:17.283 } 01:29:17.283 ] 01:29:17.283 }' 01:29:17.283 05:24:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:29:17.283 05:24:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 01:29:17.283 05:24:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:29:17.283 05:24:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 01:29:17.283 05:24:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 01:29:17.542 77.14 IOPS, 231.43 MiB/s [2024-12-09T05:24:09.159Z] [2024-12-09 05:24:09.069754] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 01:29:17.800 [2024-12-09 05:24:09.311486] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 01:29:18.059 [2024-12-09 05:24:09.418602] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 01:29:18.059 [2024-12-09 05:24:09.422317] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:29:18.318 05:24:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 01:29:18.318 05:24:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 01:29:18.318 05:24:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:29:18.318 05:24:09 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@170 -- # local process_type=rebuild 01:29:18.318 05:24:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 01:29:18.318 05:24:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:29:18.318 05:24:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:29:18.318 05:24:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:29:18.318 05:24:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:18.318 05:24:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 01:29:18.318 05:24:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:18.318 05:24:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:29:18.318 "name": "raid_bdev1", 01:29:18.318 "uuid": "232810cc-d837-47e8-b583-b83988496e58", 01:29:18.318 "strip_size_kb": 0, 01:29:18.318 "state": "online", 01:29:18.318 "raid_level": "raid1", 01:29:18.318 "superblock": true, 01:29:18.318 "num_base_bdevs": 4, 01:29:18.318 "num_base_bdevs_discovered": 3, 01:29:18.318 "num_base_bdevs_operational": 3, 01:29:18.318 "base_bdevs_list": [ 01:29:18.318 { 01:29:18.318 "name": "spare", 01:29:18.318 "uuid": "36f64a95-20de-5d7f-b7a8-b3aa899e382d", 01:29:18.318 "is_configured": true, 01:29:18.318 "data_offset": 2048, 01:29:18.318 "data_size": 63488 01:29:18.318 }, 01:29:18.318 { 01:29:18.318 "name": null, 01:29:18.318 "uuid": "00000000-0000-0000-0000-000000000000", 01:29:18.318 "is_configured": false, 01:29:18.318 "data_offset": 0, 01:29:18.318 "data_size": 63488 01:29:18.318 }, 01:29:18.318 { 01:29:18.318 "name": "BaseBdev3", 01:29:18.318 "uuid": "1bfcbd86-832f-5504-9b27-3f92cc392433", 01:29:18.318 "is_configured": true, 01:29:18.318 "data_offset": 2048, 01:29:18.318 "data_size": 63488 01:29:18.318 }, 
01:29:18.318 { 01:29:18.318 "name": "BaseBdev4", 01:29:18.318 "uuid": "4f9072ae-88f1-5a77-8000-8da3e2f23d88", 01:29:18.318 "is_configured": true, 01:29:18.318 "data_offset": 2048, 01:29:18.318 "data_size": 63488 01:29:18.318 } 01:29:18.318 ] 01:29:18.318 }' 01:29:18.318 05:24:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:29:18.318 05:24:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 01:29:18.318 05:24:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:29:18.577 05:24:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 01:29:18.577 05:24:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 01:29:18.577 05:24:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 01:29:18.577 05:24:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:29:18.577 05:24:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 01:29:18.577 05:24:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 01:29:18.577 05:24:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:29:18.577 05:24:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:29:18.577 05:24:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:29:18.578 05:24:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:18.578 05:24:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 01:29:18.578 05:24:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:18.578 05:24:10 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:29:18.578 "name": "raid_bdev1", 01:29:18.578 "uuid": "232810cc-d837-47e8-b583-b83988496e58", 01:29:18.578 "strip_size_kb": 0, 01:29:18.578 "state": "online", 01:29:18.578 "raid_level": "raid1", 01:29:18.578 "superblock": true, 01:29:18.578 "num_base_bdevs": 4, 01:29:18.578 "num_base_bdevs_discovered": 3, 01:29:18.578 "num_base_bdevs_operational": 3, 01:29:18.578 "base_bdevs_list": [ 01:29:18.578 { 01:29:18.578 "name": "spare", 01:29:18.578 "uuid": "36f64a95-20de-5d7f-b7a8-b3aa899e382d", 01:29:18.578 "is_configured": true, 01:29:18.578 "data_offset": 2048, 01:29:18.578 "data_size": 63488 01:29:18.578 }, 01:29:18.578 { 01:29:18.578 "name": null, 01:29:18.578 "uuid": "00000000-0000-0000-0000-000000000000", 01:29:18.578 "is_configured": false, 01:29:18.578 "data_offset": 0, 01:29:18.578 "data_size": 63488 01:29:18.578 }, 01:29:18.578 { 01:29:18.578 "name": "BaseBdev3", 01:29:18.578 "uuid": "1bfcbd86-832f-5504-9b27-3f92cc392433", 01:29:18.578 "is_configured": true, 01:29:18.578 "data_offset": 2048, 01:29:18.578 "data_size": 63488 01:29:18.578 }, 01:29:18.578 { 01:29:18.578 "name": "BaseBdev4", 01:29:18.578 "uuid": "4f9072ae-88f1-5a77-8000-8da3e2f23d88", 01:29:18.578 "is_configured": true, 01:29:18.578 "data_offset": 2048, 01:29:18.578 "data_size": 63488 01:29:18.578 } 01:29:18.578 ] 01:29:18.578 }' 01:29:18.578 05:24:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:29:18.578 71.50 IOPS, 214.50 MiB/s [2024-12-09T05:24:10.195Z] 05:24:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 01:29:18.578 05:24:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:29:18.578 05:24:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 01:29:18.578 05:24:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # 
verify_raid_bdev_state raid_bdev1 online raid1 0 3 01:29:18.578 05:24:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:29:18.578 05:24:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:29:18.578 05:24:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:29:18.578 05:24:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:29:18.578 05:24:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 01:29:18.578 05:24:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:29:18.578 05:24:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:29:18.578 05:24:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:29:18.578 05:24:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 01:29:18.578 05:24:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:29:18.578 05:24:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:18.578 05:24:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:29:18.578 05:24:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 01:29:18.578 05:24:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:18.578 05:24:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:29:18.578 "name": "raid_bdev1", 01:29:18.578 "uuid": "232810cc-d837-47e8-b583-b83988496e58", 01:29:18.578 "strip_size_kb": 0, 01:29:18.578 "state": "online", 01:29:18.578 "raid_level": "raid1", 01:29:18.578 "superblock": true, 01:29:18.578 "num_base_bdevs": 4, 01:29:18.578 
"num_base_bdevs_discovered": 3, 01:29:18.578 "num_base_bdevs_operational": 3, 01:29:18.578 "base_bdevs_list": [ 01:29:18.578 { 01:29:18.578 "name": "spare", 01:29:18.578 "uuid": "36f64a95-20de-5d7f-b7a8-b3aa899e382d", 01:29:18.578 "is_configured": true, 01:29:18.578 "data_offset": 2048, 01:29:18.578 "data_size": 63488 01:29:18.578 }, 01:29:18.578 { 01:29:18.578 "name": null, 01:29:18.578 "uuid": "00000000-0000-0000-0000-000000000000", 01:29:18.578 "is_configured": false, 01:29:18.578 "data_offset": 0, 01:29:18.578 "data_size": 63488 01:29:18.578 }, 01:29:18.578 { 01:29:18.578 "name": "BaseBdev3", 01:29:18.578 "uuid": "1bfcbd86-832f-5504-9b27-3f92cc392433", 01:29:18.578 "is_configured": true, 01:29:18.578 "data_offset": 2048, 01:29:18.578 "data_size": 63488 01:29:18.578 }, 01:29:18.578 { 01:29:18.578 "name": "BaseBdev4", 01:29:18.578 "uuid": "4f9072ae-88f1-5a77-8000-8da3e2f23d88", 01:29:18.578 "is_configured": true, 01:29:18.578 "data_offset": 2048, 01:29:18.578 "data_size": 63488 01:29:18.578 } 01:29:18.578 ] 01:29:18.578 }' 01:29:18.578 05:24:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:29:18.578 05:24:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 01:29:19.146 05:24:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 01:29:19.146 05:24:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:19.146 05:24:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 01:29:19.146 [2024-12-09 05:24:10.611074] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 01:29:19.146 [2024-12-09 05:24:10.611109] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 01:29:19.146 01:29:19.146 Latency(us) 01:29:19.146 [2024-12-09T05:24:10.763Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:29:19.146 Job: 
raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 01:29:19.146 raid_bdev1 : 8.64 68.71 206.14 0.00 0.00 21018.93 275.55 122016.12 01:29:19.146 [2024-12-09T05:24:10.763Z] =================================================================================================================== 01:29:19.146 [2024-12-09T05:24:10.763Z] Total : 68.71 206.14 0.00 0.00 21018.93 275.55 122016.12 01:29:19.146 { 01:29:19.146 "results": [ 01:29:19.146 { 01:29:19.146 "job": "raid_bdev1", 01:29:19.146 "core_mask": "0x1", 01:29:19.146 "workload": "randrw", 01:29:19.146 "percentage": 50, 01:29:19.146 "status": "finished", 01:29:19.146 "queue_depth": 2, 01:29:19.146 "io_size": 3145728, 01:29:19.146 "runtime": 8.644636, 01:29:19.146 "iops": 68.71313031572411, 01:29:19.146 "mibps": 206.13939094717233, 01:29:19.146 "io_failed": 0, 01:29:19.146 "io_timeout": 0, 01:29:19.146 "avg_latency_us": 21018.932108968474, 01:29:19.146 "min_latency_us": 275.5490909090909, 01:29:19.146 "max_latency_us": 122016.11636363636 01:29:19.146 } 01:29:19.146 ], 01:29:19.146 "core_count": 1 01:29:19.146 } 01:29:19.146 [2024-12-09 05:24:10.656466] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 01:29:19.146 [2024-12-09 05:24:10.656545] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:29:19.146 [2024-12-09 05:24:10.656674] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 01:29:19.146 [2024-12-09 05:24:10.656692] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 01:29:19.146 05:24:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:19.146 05:24:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 01:29:19.146 05:24:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:19.146 05:24:10 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 01:29:19.146 05:24:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 01:29:19.146 05:24:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:19.146 05:24:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 01:29:19.146 05:24:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 01:29:19.146 05:24:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 01:29:19.146 05:24:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 01:29:19.146 05:24:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 01:29:19.146 05:24:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 01:29:19.146 05:24:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 01:29:19.146 05:24:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 01:29:19.146 05:24:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 01:29:19.146 05:24:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 01:29:19.146 05:24:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 01:29:19.146 05:24:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 01:29:19.146 05:24:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 01:29:19.405 /dev/nbd0 01:29:19.664 05:24:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 01:29:19.664 05:24:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 01:29:19.664 05:24:11 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 01:29:19.664 05:24:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 01:29:19.664 05:24:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 01:29:19.664 05:24:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 01:29:19.664 05:24:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 01:29:19.664 05:24:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 01:29:19.664 05:24:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 01:29:19.664 05:24:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 01:29:19.664 05:24:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 01:29:19.664 1+0 records in 01:29:19.664 1+0 records out 01:29:19.664 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000407479 s, 10.1 MB/s 01:29:19.664 05:24:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:29:19.664 05:24:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 01:29:19.664 05:24:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:29:19.664 05:24:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 01:29:19.664 05:24:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 01:29:19.664 05:24:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 01:29:19.664 05:24:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 01:29:19.664 
05:24:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 01:29:19.664 05:24:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 01:29:19.664 05:24:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@728 -- # continue 01:29:19.664 05:24:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 01:29:19.664 05:24:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 01:29:19.664 05:24:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 01:29:19.664 05:24:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 01:29:19.664 05:24:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 01:29:19.664 05:24:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 01:29:19.664 05:24:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 01:29:19.664 05:24:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 01:29:19.664 05:24:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 01:29:19.664 05:24:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 01:29:19.664 05:24:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 01:29:19.664 05:24:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 01:29:19.924 /dev/nbd1 01:29:19.924 05:24:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 01:29:19.924 05:24:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 01:29:19.924 05:24:11 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@872 -- # local nbd_name=nbd1 01:29:19.924 05:24:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 01:29:19.924 05:24:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 01:29:19.924 05:24:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 01:29:19.924 05:24:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 01:29:19.924 05:24:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 01:29:19.924 05:24:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 01:29:19.924 05:24:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 01:29:19.924 05:24:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 01:29:19.924 1+0 records in 01:29:19.924 1+0 records out 01:29:19.924 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00060635 s, 6.8 MB/s 01:29:19.924 05:24:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:29:19.924 05:24:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 01:29:19.924 05:24:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:29:19.924 05:24:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 01:29:19.924 05:24:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 01:29:19.924 05:24:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 01:29:19.924 05:24:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 01:29:19.924 05:24:11 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 01:29:19.924 05:24:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 01:29:19.924 05:24:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 01:29:19.924 05:24:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 01:29:19.924 05:24:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 01:29:19.924 05:24:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 01:29:19.924 05:24:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 01:29:19.924 05:24:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 01:29:20.182 05:24:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 01:29:20.183 05:24:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 01:29:20.183 05:24:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 01:29:20.183 05:24:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:29:20.183 05:24:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:29:20.183 05:24:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 01:29:20.441 05:24:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 01:29:20.441 05:24:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 01:29:20.441 05:24:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 01:29:20.441 05:24:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 
01:29:20.441 05:24:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 01:29:20.441 05:24:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 01:29:20.441 05:24:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 01:29:20.441 05:24:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 01:29:20.441 05:24:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 01:29:20.441 05:24:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 01:29:20.441 05:24:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 01:29:20.441 05:24:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 01:29:20.441 05:24:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 01:29:20.441 05:24:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 01:29:20.441 /dev/nbd1 01:29:20.441 05:24:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 01:29:20.441 05:24:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 01:29:20.441 05:24:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 01:29:20.441 05:24:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 01:29:20.441 05:24:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 01:29:20.441 05:24:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 01:29:20.441 05:24:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 01:29:20.699 05:24:12 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 01:29:20.699 05:24:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 01:29:20.699 05:24:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 01:29:20.699 05:24:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 01:29:20.699 1+0 records in 01:29:20.699 1+0 records out 01:29:20.699 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00060585 s, 6.8 MB/s 01:29:20.699 05:24:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:29:20.699 05:24:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 01:29:20.699 05:24:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:29:20.699 05:24:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 01:29:20.699 05:24:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 01:29:20.699 05:24:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 01:29:20.699 05:24:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 01:29:20.699 05:24:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 01:29:20.699 05:24:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 01:29:20.699 05:24:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 01:29:20.699 05:24:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 01:29:20.699 05:24:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local 
nbd_list 01:29:20.699 05:24:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 01:29:20.699 05:24:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 01:29:20.699 05:24:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 01:29:20.957 05:24:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 01:29:20.957 05:24:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 01:29:20.957 05:24:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 01:29:20.957 05:24:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:29:20.957 05:24:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:29:20.957 05:24:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 01:29:20.957 05:24:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 01:29:20.958 05:24:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 01:29:20.958 05:24:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 01:29:20.958 05:24:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 01:29:20.958 05:24:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 01:29:20.958 05:24:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 01:29:20.958 05:24:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 01:29:20.958 05:24:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 01:29:20.958 05:24:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 01:29:21.215 05:24:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 01:29:21.215 05:24:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 01:29:21.215 05:24:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 01:29:21.215 05:24:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:29:21.215 05:24:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:29:21.215 05:24:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 01:29:21.215 05:24:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 01:29:21.215 05:24:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 01:29:21.215 05:24:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 01:29:21.215 05:24:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 01:29:21.215 05:24:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:21.215 05:24:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 01:29:21.216 05:24:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:21.216 05:24:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 01:29:21.216 05:24:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:21.216 05:24:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 01:29:21.216 [2024-12-09 05:24:12.741096] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 01:29:21.216 [2024-12-09 05:24:12.741584] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:29:21.216 [2024-12-09 05:24:12.741698] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 01:29:21.216 [2024-12-09 05:24:12.741723] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:29:21.216 [2024-12-09 05:24:12.744935] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:29:21.216 [2024-12-09 05:24:12.745220] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 01:29:21.216 [2024-12-09 05:24:12.745493] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 01:29:21.216 [2024-12-09 05:24:12.745591] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 01:29:21.216 [2024-12-09 05:24:12.745884] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 01:29:21.216 spare 01:29:21.216 [2024-12-09 05:24:12.746097] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 01:29:21.216 05:24:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:21.216 05:24:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 01:29:21.216 05:24:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:21.216 05:24:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 01:29:21.475 [2024-12-09 05:24:12.846252] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 01:29:21.475 [2024-12-09 05:24:12.846283] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 01:29:21.475 [2024-12-09 05:24:12.846732] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037160 01:29:21.475 [2024-12-09 05:24:12.846995] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 
0x617000007b00 01:29:21.475 [2024-12-09 05:24:12.847022] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 01:29:21.475 [2024-12-09 05:24:12.847271] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:29:21.475 05:24:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:21.475 05:24:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 01:29:21.475 05:24:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:29:21.475 05:24:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:29:21.475 05:24:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:29:21.475 05:24:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:29:21.475 05:24:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 01:29:21.475 05:24:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:29:21.475 05:24:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:29:21.475 05:24:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:29:21.475 05:24:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 01:29:21.475 05:24:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:29:21.475 05:24:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:29:21.475 05:24:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:21.475 05:24:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 01:29:21.475 
05:24:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:21.475 05:24:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:29:21.475 "name": "raid_bdev1", 01:29:21.475 "uuid": "232810cc-d837-47e8-b583-b83988496e58", 01:29:21.475 "strip_size_kb": 0, 01:29:21.475 "state": "online", 01:29:21.475 "raid_level": "raid1", 01:29:21.475 "superblock": true, 01:29:21.475 "num_base_bdevs": 4, 01:29:21.475 "num_base_bdevs_discovered": 3, 01:29:21.475 "num_base_bdevs_operational": 3, 01:29:21.475 "base_bdevs_list": [ 01:29:21.475 { 01:29:21.475 "name": "spare", 01:29:21.475 "uuid": "36f64a95-20de-5d7f-b7a8-b3aa899e382d", 01:29:21.475 "is_configured": true, 01:29:21.475 "data_offset": 2048, 01:29:21.475 "data_size": 63488 01:29:21.475 }, 01:29:21.475 { 01:29:21.475 "name": null, 01:29:21.475 "uuid": "00000000-0000-0000-0000-000000000000", 01:29:21.475 "is_configured": false, 01:29:21.475 "data_offset": 2048, 01:29:21.475 "data_size": 63488 01:29:21.475 }, 01:29:21.475 { 01:29:21.475 "name": "BaseBdev3", 01:29:21.475 "uuid": "1bfcbd86-832f-5504-9b27-3f92cc392433", 01:29:21.475 "is_configured": true, 01:29:21.475 "data_offset": 2048, 01:29:21.475 "data_size": 63488 01:29:21.475 }, 01:29:21.475 { 01:29:21.475 "name": "BaseBdev4", 01:29:21.475 "uuid": "4f9072ae-88f1-5a77-8000-8da3e2f23d88", 01:29:21.475 "is_configured": true, 01:29:21.475 "data_offset": 2048, 01:29:21.475 "data_size": 63488 01:29:21.475 } 01:29:21.475 ] 01:29:21.475 }' 01:29:21.475 05:24:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:29:21.475 05:24:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 01:29:21.734 05:24:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 01:29:21.734 05:24:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:29:21.734 05:24:13 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 01:29:21.734 05:24:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 01:29:21.734 05:24:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:29:21.734 05:24:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:29:21.734 05:24:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:29:21.734 05:24:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:21.734 05:24:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 01:29:21.734 05:24:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:22.007 05:24:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:29:22.007 "name": "raid_bdev1", 01:29:22.007 "uuid": "232810cc-d837-47e8-b583-b83988496e58", 01:29:22.007 "strip_size_kb": 0, 01:29:22.007 "state": "online", 01:29:22.007 "raid_level": "raid1", 01:29:22.007 "superblock": true, 01:29:22.007 "num_base_bdevs": 4, 01:29:22.007 "num_base_bdevs_discovered": 3, 01:29:22.007 "num_base_bdevs_operational": 3, 01:29:22.007 "base_bdevs_list": [ 01:29:22.007 { 01:29:22.007 "name": "spare", 01:29:22.007 "uuid": "36f64a95-20de-5d7f-b7a8-b3aa899e382d", 01:29:22.007 "is_configured": true, 01:29:22.007 "data_offset": 2048, 01:29:22.007 "data_size": 63488 01:29:22.007 }, 01:29:22.007 { 01:29:22.007 "name": null, 01:29:22.007 "uuid": "00000000-0000-0000-0000-000000000000", 01:29:22.007 "is_configured": false, 01:29:22.007 "data_offset": 2048, 01:29:22.007 "data_size": 63488 01:29:22.007 }, 01:29:22.007 { 01:29:22.007 "name": "BaseBdev3", 01:29:22.007 "uuid": "1bfcbd86-832f-5504-9b27-3f92cc392433", 01:29:22.007 "is_configured": true, 01:29:22.007 "data_offset": 2048, 01:29:22.007 
"data_size": 63488 01:29:22.007 }, 01:29:22.007 { 01:29:22.007 "name": "BaseBdev4", 01:29:22.007 "uuid": "4f9072ae-88f1-5a77-8000-8da3e2f23d88", 01:29:22.007 "is_configured": true, 01:29:22.007 "data_offset": 2048, 01:29:22.007 "data_size": 63488 01:29:22.007 } 01:29:22.007 ] 01:29:22.007 }' 01:29:22.007 05:24:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:29:22.007 05:24:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 01:29:22.007 05:24:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:29:22.007 05:24:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 01:29:22.007 05:24:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 01:29:22.007 05:24:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:22.007 05:24:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 01:29:22.007 05:24:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 01:29:22.007 05:24:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:22.007 05:24:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 01:29:22.007 05:24:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 01:29:22.007 05:24:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:22.007 05:24:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 01:29:22.007 [2024-12-09 05:24:13.538091] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 01:29:22.007 05:24:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:22.007 05:24:13 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 01:29:22.007 05:24:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:29:22.007 05:24:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:29:22.007 05:24:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:29:22.007 05:24:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:29:22.007 05:24:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 01:29:22.007 05:24:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:29:22.007 05:24:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:29:22.007 05:24:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:29:22.007 05:24:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 01:29:22.007 05:24:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:29:22.007 05:24:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:22.007 05:24:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:29:22.007 05:24:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 01:29:22.007 05:24:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:22.007 05:24:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:29:22.007 "name": "raid_bdev1", 01:29:22.007 "uuid": "232810cc-d837-47e8-b583-b83988496e58", 01:29:22.007 "strip_size_kb": 0, 01:29:22.007 "state": "online", 01:29:22.007 "raid_level": "raid1", 01:29:22.007 
"superblock": true, 01:29:22.007 "num_base_bdevs": 4, 01:29:22.007 "num_base_bdevs_discovered": 2, 01:29:22.007 "num_base_bdevs_operational": 2, 01:29:22.007 "base_bdevs_list": [ 01:29:22.007 { 01:29:22.007 "name": null, 01:29:22.007 "uuid": "00000000-0000-0000-0000-000000000000", 01:29:22.007 "is_configured": false, 01:29:22.007 "data_offset": 0, 01:29:22.007 "data_size": 63488 01:29:22.007 }, 01:29:22.007 { 01:29:22.007 "name": null, 01:29:22.007 "uuid": "00000000-0000-0000-0000-000000000000", 01:29:22.007 "is_configured": false, 01:29:22.007 "data_offset": 2048, 01:29:22.007 "data_size": 63488 01:29:22.007 }, 01:29:22.007 { 01:29:22.007 "name": "BaseBdev3", 01:29:22.007 "uuid": "1bfcbd86-832f-5504-9b27-3f92cc392433", 01:29:22.007 "is_configured": true, 01:29:22.007 "data_offset": 2048, 01:29:22.007 "data_size": 63488 01:29:22.007 }, 01:29:22.007 { 01:29:22.007 "name": "BaseBdev4", 01:29:22.007 "uuid": "4f9072ae-88f1-5a77-8000-8da3e2f23d88", 01:29:22.007 "is_configured": true, 01:29:22.007 "data_offset": 2048, 01:29:22.007 "data_size": 63488 01:29:22.007 } 01:29:22.007 ] 01:29:22.007 }' 01:29:22.007 05:24:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:29:22.007 05:24:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 01:29:22.573 05:24:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 01:29:22.573 05:24:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:22.573 05:24:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 01:29:22.573 [2024-12-09 05:24:14.054334] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 01:29:22.573 [2024-12-09 05:24:14.054655] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 01:29:22.573 [2024-12-09 05:24:14.054689] 
bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 01:29:22.573 [2024-12-09 05:24:14.055488] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 01:29:22.573 [2024-12-09 05:24:14.069947] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037230 01:29:22.573 05:24:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:22.573 05:24:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 01:29:22.573 [2024-12-09 05:24:14.072658] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 01:29:23.508 05:24:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 01:29:23.508 05:24:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:29:23.508 05:24:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 01:29:23.508 05:24:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 01:29:23.508 05:24:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:29:23.508 05:24:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:29:23.508 05:24:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:29:23.508 05:24:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:23.508 05:24:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 01:29:23.508 05:24:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:23.766 05:24:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:29:23.766 "name": "raid_bdev1", 01:29:23.766 "uuid": 
"232810cc-d837-47e8-b583-b83988496e58", 01:29:23.766 "strip_size_kb": 0, 01:29:23.766 "state": "online", 01:29:23.766 "raid_level": "raid1", 01:29:23.766 "superblock": true, 01:29:23.766 "num_base_bdevs": 4, 01:29:23.766 "num_base_bdevs_discovered": 3, 01:29:23.766 "num_base_bdevs_operational": 3, 01:29:23.766 "process": { 01:29:23.766 "type": "rebuild", 01:29:23.766 "target": "spare", 01:29:23.766 "progress": { 01:29:23.766 "blocks": 20480, 01:29:23.766 "percent": 32 01:29:23.766 } 01:29:23.766 }, 01:29:23.766 "base_bdevs_list": [ 01:29:23.766 { 01:29:23.766 "name": "spare", 01:29:23.766 "uuid": "36f64a95-20de-5d7f-b7a8-b3aa899e382d", 01:29:23.766 "is_configured": true, 01:29:23.766 "data_offset": 2048, 01:29:23.766 "data_size": 63488 01:29:23.766 }, 01:29:23.766 { 01:29:23.766 "name": null, 01:29:23.766 "uuid": "00000000-0000-0000-0000-000000000000", 01:29:23.766 "is_configured": false, 01:29:23.766 "data_offset": 2048, 01:29:23.766 "data_size": 63488 01:29:23.766 }, 01:29:23.766 { 01:29:23.766 "name": "BaseBdev3", 01:29:23.766 "uuid": "1bfcbd86-832f-5504-9b27-3f92cc392433", 01:29:23.766 "is_configured": true, 01:29:23.766 "data_offset": 2048, 01:29:23.766 "data_size": 63488 01:29:23.766 }, 01:29:23.766 { 01:29:23.766 "name": "BaseBdev4", 01:29:23.766 "uuid": "4f9072ae-88f1-5a77-8000-8da3e2f23d88", 01:29:23.766 "is_configured": true, 01:29:23.766 "data_offset": 2048, 01:29:23.766 "data_size": 63488 01:29:23.766 } 01:29:23.766 ] 01:29:23.766 }' 01:29:23.766 05:24:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:29:23.766 05:24:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 01:29:23.766 05:24:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:29:23.766 05:24:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 01:29:23.766 05:24:15 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 01:29:23.766 05:24:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:23.766 05:24:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 01:29:23.766 [2024-12-09 05:24:15.246575] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 01:29:23.767 [2024-12-09 05:24:15.281771] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 01:29:23.767 [2024-12-09 05:24:15.282652] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:29:23.767 [2024-12-09 05:24:15.282703] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 01:29:23.767 [2024-12-09 05:24:15.282720] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 01:29:23.767 05:24:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:23.767 05:24:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 01:29:23.767 05:24:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:29:23.767 05:24:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:29:23.767 05:24:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:29:23.767 05:24:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:29:23.767 05:24:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 01:29:23.767 05:24:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:29:23.767 05:24:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:29:23.767 05:24:15 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:29:23.767 05:24:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 01:29:23.767 05:24:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:29:23.767 05:24:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:29:23.767 05:24:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:23.767 05:24:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 01:29:23.767 05:24:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:23.767 05:24:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:29:23.767 "name": "raid_bdev1", 01:29:23.767 "uuid": "232810cc-d837-47e8-b583-b83988496e58", 01:29:23.767 "strip_size_kb": 0, 01:29:23.767 "state": "online", 01:29:23.767 "raid_level": "raid1", 01:29:23.767 "superblock": true, 01:29:23.767 "num_base_bdevs": 4, 01:29:23.767 "num_base_bdevs_discovered": 2, 01:29:23.767 "num_base_bdevs_operational": 2, 01:29:23.767 "base_bdevs_list": [ 01:29:23.767 { 01:29:23.767 "name": null, 01:29:23.767 "uuid": "00000000-0000-0000-0000-000000000000", 01:29:23.767 "is_configured": false, 01:29:23.767 "data_offset": 0, 01:29:23.767 "data_size": 63488 01:29:23.767 }, 01:29:23.767 { 01:29:23.767 "name": null, 01:29:23.767 "uuid": "00000000-0000-0000-0000-000000000000", 01:29:23.767 "is_configured": false, 01:29:23.767 "data_offset": 2048, 01:29:23.767 "data_size": 63488 01:29:23.767 }, 01:29:23.767 { 01:29:23.767 "name": "BaseBdev3", 01:29:23.767 "uuid": "1bfcbd86-832f-5504-9b27-3f92cc392433", 01:29:23.767 "is_configured": true, 01:29:23.767 "data_offset": 2048, 01:29:23.767 "data_size": 63488 01:29:23.767 }, 01:29:23.767 { 01:29:23.767 "name": "BaseBdev4", 01:29:23.767 "uuid": "4f9072ae-88f1-5a77-8000-8da3e2f23d88", 
01:29:23.767 "is_configured": true, 01:29:23.767 "data_offset": 2048, 01:29:23.767 "data_size": 63488 01:29:23.767 } 01:29:23.767 ] 01:29:23.767 }' 01:29:23.767 05:24:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:29:23.767 05:24:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 01:29:24.359 05:24:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 01:29:24.360 05:24:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:24.360 05:24:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 01:29:24.360 [2024-12-09 05:24:15.856379] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 01:29:24.360 [2024-12-09 05:24:15.856531] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:29:24.360 [2024-12-09 05:24:15.856591] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 01:29:24.360 [2024-12-09 05:24:15.856612] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:29:24.360 [2024-12-09 05:24:15.857472] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:29:24.360 [2024-12-09 05:24:15.857560] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 01:29:24.360 [2024-12-09 05:24:15.857758] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 01:29:24.360 [2024-12-09 05:24:15.857782] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 01:29:24.360 [2024-12-09 05:24:15.857805] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
01:29:24.360 [2024-12-09 05:24:15.857840] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 01:29:24.360 [2024-12-09 05:24:15.873407] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037300 01:29:24.360 spare 01:29:24.360 05:24:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:24.360 05:24:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 01:29:24.360 [2024-12-09 05:24:15.876349] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 01:29:25.296 05:24:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 01:29:25.296 05:24:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:29:25.296 05:24:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 01:29:25.296 05:24:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 01:29:25.296 05:24:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:29:25.296 05:24:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:29:25.296 05:24:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:29:25.296 05:24:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:25.296 05:24:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 01:29:25.296 05:24:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:25.555 05:24:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:29:25.555 "name": "raid_bdev1", 01:29:25.555 "uuid": "232810cc-d837-47e8-b583-b83988496e58", 01:29:25.555 "strip_size_kb": 0, 01:29:25.555 
"state": "online", 01:29:25.555 "raid_level": "raid1", 01:29:25.555 "superblock": true, 01:29:25.555 "num_base_bdevs": 4, 01:29:25.555 "num_base_bdevs_discovered": 3, 01:29:25.555 "num_base_bdevs_operational": 3, 01:29:25.555 "process": { 01:29:25.555 "type": "rebuild", 01:29:25.555 "target": "spare", 01:29:25.555 "progress": { 01:29:25.555 "blocks": 20480, 01:29:25.555 "percent": 32 01:29:25.555 } 01:29:25.555 }, 01:29:25.555 "base_bdevs_list": [ 01:29:25.555 { 01:29:25.555 "name": "spare", 01:29:25.555 "uuid": "36f64a95-20de-5d7f-b7a8-b3aa899e382d", 01:29:25.555 "is_configured": true, 01:29:25.555 "data_offset": 2048, 01:29:25.555 "data_size": 63488 01:29:25.555 }, 01:29:25.555 { 01:29:25.555 "name": null, 01:29:25.555 "uuid": "00000000-0000-0000-0000-000000000000", 01:29:25.555 "is_configured": false, 01:29:25.555 "data_offset": 2048, 01:29:25.555 "data_size": 63488 01:29:25.555 }, 01:29:25.555 { 01:29:25.555 "name": "BaseBdev3", 01:29:25.555 "uuid": "1bfcbd86-832f-5504-9b27-3f92cc392433", 01:29:25.555 "is_configured": true, 01:29:25.555 "data_offset": 2048, 01:29:25.555 "data_size": 63488 01:29:25.555 }, 01:29:25.555 { 01:29:25.555 "name": "BaseBdev4", 01:29:25.555 "uuid": "4f9072ae-88f1-5a77-8000-8da3e2f23d88", 01:29:25.555 "is_configured": true, 01:29:25.555 "data_offset": 2048, 01:29:25.555 "data_size": 63488 01:29:25.555 } 01:29:25.555 ] 01:29:25.555 }' 01:29:25.555 05:24:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:29:25.555 05:24:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 01:29:25.555 05:24:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:29:25.555 05:24:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 01:29:25.555 05:24:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 01:29:25.555 05:24:17 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:25.555 05:24:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 01:29:25.555 [2024-12-09 05:24:17.050933] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 01:29:25.555 [2024-12-09 05:24:17.086801] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 01:29:25.555 [2024-12-09 05:24:17.086949] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:29:25.555 [2024-12-09 05:24:17.086982] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 01:29:25.555 [2024-12-09 05:24:17.087002] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 01:29:25.555 05:24:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:25.555 05:24:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 01:29:25.555 05:24:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:29:25.555 05:24:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:29:25.555 05:24:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:29:25.555 05:24:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:29:25.555 05:24:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 01:29:25.555 05:24:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:29:25.555 05:24:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:29:25.555 05:24:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:29:25.555 05:24:17 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 01:29:25.555 05:24:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:29:25.555 05:24:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:29:25.555 05:24:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:25.555 05:24:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 01:29:25.555 05:24:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:25.555 05:24:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:29:25.555 "name": "raid_bdev1", 01:29:25.555 "uuid": "232810cc-d837-47e8-b583-b83988496e58", 01:29:25.555 "strip_size_kb": 0, 01:29:25.555 "state": "online", 01:29:25.555 "raid_level": "raid1", 01:29:25.555 "superblock": true, 01:29:25.555 "num_base_bdevs": 4, 01:29:25.555 "num_base_bdevs_discovered": 2, 01:29:25.555 "num_base_bdevs_operational": 2, 01:29:25.555 "base_bdevs_list": [ 01:29:25.555 { 01:29:25.555 "name": null, 01:29:25.555 "uuid": "00000000-0000-0000-0000-000000000000", 01:29:25.555 "is_configured": false, 01:29:25.555 "data_offset": 0, 01:29:25.555 "data_size": 63488 01:29:25.555 }, 01:29:25.555 { 01:29:25.555 "name": null, 01:29:25.555 "uuid": "00000000-0000-0000-0000-000000000000", 01:29:25.555 "is_configured": false, 01:29:25.555 "data_offset": 2048, 01:29:25.555 "data_size": 63488 01:29:25.555 }, 01:29:25.555 { 01:29:25.555 "name": "BaseBdev3", 01:29:25.555 "uuid": "1bfcbd86-832f-5504-9b27-3f92cc392433", 01:29:25.555 "is_configured": true, 01:29:25.555 "data_offset": 2048, 01:29:25.555 "data_size": 63488 01:29:25.555 }, 01:29:25.555 { 01:29:25.555 "name": "BaseBdev4", 01:29:25.555 "uuid": "4f9072ae-88f1-5a77-8000-8da3e2f23d88", 01:29:25.555 "is_configured": true, 01:29:25.555 "data_offset": 2048, 01:29:25.555 
"data_size": 63488 01:29:25.555 } 01:29:25.555 ] 01:29:25.555 }' 01:29:25.555 05:24:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:29:25.814 05:24:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 01:29:26.072 05:24:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 01:29:26.072 05:24:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:29:26.072 05:24:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 01:29:26.072 05:24:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 01:29:26.072 05:24:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:29:26.072 05:24:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:29:26.072 05:24:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:26.072 05:24:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 01:29:26.072 05:24:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:29:26.072 05:24:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:26.330 05:24:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:29:26.330 "name": "raid_bdev1", 01:29:26.330 "uuid": "232810cc-d837-47e8-b583-b83988496e58", 01:29:26.330 "strip_size_kb": 0, 01:29:26.330 "state": "online", 01:29:26.330 "raid_level": "raid1", 01:29:26.330 "superblock": true, 01:29:26.330 "num_base_bdevs": 4, 01:29:26.330 "num_base_bdevs_discovered": 2, 01:29:26.330 "num_base_bdevs_operational": 2, 01:29:26.330 "base_bdevs_list": [ 01:29:26.330 { 01:29:26.330 "name": null, 01:29:26.330 "uuid": "00000000-0000-0000-0000-000000000000", 
01:29:26.330 "is_configured": false, 01:29:26.330 "data_offset": 0, 01:29:26.330 "data_size": 63488 01:29:26.330 }, 01:29:26.330 { 01:29:26.330 "name": null, 01:29:26.330 "uuid": "00000000-0000-0000-0000-000000000000", 01:29:26.330 "is_configured": false, 01:29:26.330 "data_offset": 2048, 01:29:26.330 "data_size": 63488 01:29:26.330 }, 01:29:26.330 { 01:29:26.330 "name": "BaseBdev3", 01:29:26.330 "uuid": "1bfcbd86-832f-5504-9b27-3f92cc392433", 01:29:26.330 "is_configured": true, 01:29:26.330 "data_offset": 2048, 01:29:26.330 "data_size": 63488 01:29:26.330 }, 01:29:26.330 { 01:29:26.330 "name": "BaseBdev4", 01:29:26.330 "uuid": "4f9072ae-88f1-5a77-8000-8da3e2f23d88", 01:29:26.330 "is_configured": true, 01:29:26.331 "data_offset": 2048, 01:29:26.331 "data_size": 63488 01:29:26.331 } 01:29:26.331 ] 01:29:26.331 }' 01:29:26.331 05:24:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:29:26.331 05:24:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 01:29:26.331 05:24:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:29:26.331 05:24:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 01:29:26.331 05:24:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 01:29:26.331 05:24:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:26.331 05:24:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 01:29:26.331 05:24:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:26.331 05:24:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 01:29:26.331 05:24:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:26.331 05:24:17 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 01:29:26.331 [2024-12-09 05:24:17.846380] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 01:29:26.331 [2024-12-09 05:24:17.846530] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:29:26.331 [2024-12-09 05:24:17.846563] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 01:29:26.331 [2024-12-09 05:24:17.846584] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:29:26.331 [2024-12-09 05:24:17.847382] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:29:26.331 [2024-12-09 05:24:17.847445] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 01:29:26.331 [2024-12-09 05:24:17.847570] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 01:29:26.331 [2024-12-09 05:24:17.847605] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 01:29:26.331 [2024-12-09 05:24:17.847622] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 01:29:26.331 [2024-12-09 05:24:17.847640] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 01:29:26.331 BaseBdev1 01:29:26.331 05:24:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:26.331 05:24:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 01:29:27.265 05:24:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 01:29:27.265 05:24:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:29:27.265 05:24:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 01:29:27.265 05:24:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:29:27.265 05:24:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:29:27.265 05:24:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 01:29:27.265 05:24:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:29:27.265 05:24:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:29:27.265 05:24:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:29:27.265 05:24:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 01:29:27.265 05:24:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:29:27.265 05:24:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:29:27.265 05:24:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:27.265 05:24:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 01:29:27.523 05:24:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:27.523 05:24:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:29:27.523 "name": "raid_bdev1", 01:29:27.523 "uuid": "232810cc-d837-47e8-b583-b83988496e58", 01:29:27.523 "strip_size_kb": 0, 01:29:27.523 "state": "online", 01:29:27.523 "raid_level": "raid1", 01:29:27.523 "superblock": true, 01:29:27.523 "num_base_bdevs": 4, 01:29:27.523 "num_base_bdevs_discovered": 2, 01:29:27.523 "num_base_bdevs_operational": 2, 01:29:27.523 "base_bdevs_list": [ 01:29:27.523 { 01:29:27.523 "name": null, 01:29:27.523 "uuid": "00000000-0000-0000-0000-000000000000", 01:29:27.523 "is_configured": false, 01:29:27.523 
"data_offset": 0, 01:29:27.523 "data_size": 63488 01:29:27.523 }, 01:29:27.523 { 01:29:27.523 "name": null, 01:29:27.523 "uuid": "00000000-0000-0000-0000-000000000000", 01:29:27.523 "is_configured": false, 01:29:27.523 "data_offset": 2048, 01:29:27.523 "data_size": 63488 01:29:27.523 }, 01:29:27.523 { 01:29:27.523 "name": "BaseBdev3", 01:29:27.523 "uuid": "1bfcbd86-832f-5504-9b27-3f92cc392433", 01:29:27.523 "is_configured": true, 01:29:27.523 "data_offset": 2048, 01:29:27.523 "data_size": 63488 01:29:27.523 }, 01:29:27.523 { 01:29:27.523 "name": "BaseBdev4", 01:29:27.523 "uuid": "4f9072ae-88f1-5a77-8000-8da3e2f23d88", 01:29:27.523 "is_configured": true, 01:29:27.523 "data_offset": 2048, 01:29:27.523 "data_size": 63488 01:29:27.523 } 01:29:27.523 ] 01:29:27.523 }' 01:29:27.523 05:24:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:29:27.523 05:24:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 01:29:27.781 05:24:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 01:29:27.781 05:24:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:29:27.781 05:24:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 01:29:27.781 05:24:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 01:29:27.781 05:24:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:29:27.781 05:24:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:29:27.781 05:24:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:29:27.781 05:24:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:27.781 05:24:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set 
+x 01:29:28.040 05:24:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:28.040 05:24:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:29:28.040 "name": "raid_bdev1", 01:29:28.040 "uuid": "232810cc-d837-47e8-b583-b83988496e58", 01:29:28.040 "strip_size_kb": 0, 01:29:28.040 "state": "online", 01:29:28.040 "raid_level": "raid1", 01:29:28.040 "superblock": true, 01:29:28.040 "num_base_bdevs": 4, 01:29:28.040 "num_base_bdevs_discovered": 2, 01:29:28.040 "num_base_bdevs_operational": 2, 01:29:28.040 "base_bdevs_list": [ 01:29:28.040 { 01:29:28.040 "name": null, 01:29:28.040 "uuid": "00000000-0000-0000-0000-000000000000", 01:29:28.040 "is_configured": false, 01:29:28.040 "data_offset": 0, 01:29:28.040 "data_size": 63488 01:29:28.040 }, 01:29:28.040 { 01:29:28.040 "name": null, 01:29:28.040 "uuid": "00000000-0000-0000-0000-000000000000", 01:29:28.040 "is_configured": false, 01:29:28.040 "data_offset": 2048, 01:29:28.040 "data_size": 63488 01:29:28.040 }, 01:29:28.040 { 01:29:28.040 "name": "BaseBdev3", 01:29:28.040 "uuid": "1bfcbd86-832f-5504-9b27-3f92cc392433", 01:29:28.040 "is_configured": true, 01:29:28.040 "data_offset": 2048, 01:29:28.040 "data_size": 63488 01:29:28.040 }, 01:29:28.040 { 01:29:28.040 "name": "BaseBdev4", 01:29:28.040 "uuid": "4f9072ae-88f1-5a77-8000-8da3e2f23d88", 01:29:28.040 "is_configured": true, 01:29:28.040 "data_offset": 2048, 01:29:28.040 "data_size": 63488 01:29:28.040 } 01:29:28.040 ] 01:29:28.040 }' 01:29:28.040 05:24:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:29:28.040 05:24:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 01:29:28.040 05:24:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:29:28.040 05:24:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 01:29:28.040 
05:24:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 01:29:28.040 05:24:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # local es=0 01:29:28.040 05:24:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 01:29:28.040 05:24:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 01:29:28.040 05:24:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:29:28.040 05:24:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 01:29:28.040 05:24:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:29:28.040 05:24:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 01:29:28.040 05:24:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:28.040 05:24:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 01:29:28.040 [2024-12-09 05:24:19.559329] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 01:29:28.040 [2024-12-09 05:24:19.559666] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 01:29:28.040 [2024-12-09 05:24:19.559687] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 01:29:28.040 request: 01:29:28.040 { 01:29:28.040 "base_bdev": "BaseBdev1", 01:29:28.040 "raid_bdev": "raid_bdev1", 01:29:28.040 "method": "bdev_raid_add_base_bdev", 01:29:28.040 "req_id": 1 01:29:28.040 } 01:29:28.040 Got JSON-RPC error response 01:29:28.040 response: 01:29:28.040 { 01:29:28.040 "code": -22, 01:29:28.040 
"message": "Failed to add base bdev to RAID bdev: Invalid argument" 01:29:28.040 } 01:29:28.040 05:24:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 01:29:28.040 05:24:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 01:29:28.040 05:24:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:29:28.040 05:24:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:29:28.040 05:24:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:29:28.040 05:24:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 01:29:28.975 05:24:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 01:29:28.975 05:24:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:29:28.975 05:24:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:29:28.975 05:24:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:29:28.975 05:24:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:29:28.975 05:24:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 01:29:28.975 05:24:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:29:28.975 05:24:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:29:28.975 05:24:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:29:28.975 05:24:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 01:29:28.975 05:24:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:29:28.975 05:24:20 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:28.975 05:24:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 01:29:28.975 05:24:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:29:29.235 05:24:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:29.235 05:24:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:29:29.235 "name": "raid_bdev1", 01:29:29.235 "uuid": "232810cc-d837-47e8-b583-b83988496e58", 01:29:29.235 "strip_size_kb": 0, 01:29:29.235 "state": "online", 01:29:29.235 "raid_level": "raid1", 01:29:29.235 "superblock": true, 01:29:29.235 "num_base_bdevs": 4, 01:29:29.235 "num_base_bdevs_discovered": 2, 01:29:29.235 "num_base_bdevs_operational": 2, 01:29:29.235 "base_bdevs_list": [ 01:29:29.235 { 01:29:29.235 "name": null, 01:29:29.235 "uuid": "00000000-0000-0000-0000-000000000000", 01:29:29.235 "is_configured": false, 01:29:29.235 "data_offset": 0, 01:29:29.235 "data_size": 63488 01:29:29.235 }, 01:29:29.235 { 01:29:29.235 "name": null, 01:29:29.235 "uuid": "00000000-0000-0000-0000-000000000000", 01:29:29.235 "is_configured": false, 01:29:29.235 "data_offset": 2048, 01:29:29.235 "data_size": 63488 01:29:29.235 }, 01:29:29.235 { 01:29:29.235 "name": "BaseBdev3", 01:29:29.235 "uuid": "1bfcbd86-832f-5504-9b27-3f92cc392433", 01:29:29.235 "is_configured": true, 01:29:29.235 "data_offset": 2048, 01:29:29.235 "data_size": 63488 01:29:29.235 }, 01:29:29.235 { 01:29:29.235 "name": "BaseBdev4", 01:29:29.235 "uuid": "4f9072ae-88f1-5a77-8000-8da3e2f23d88", 01:29:29.235 "is_configured": true, 01:29:29.235 "data_offset": 2048, 01:29:29.235 "data_size": 63488 01:29:29.235 } 01:29:29.235 ] 01:29:29.235 }' 01:29:29.235 05:24:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:29:29.235 05:24:20 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 01:29:29.493 05:24:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 01:29:29.493 05:24:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:29:29.493 05:24:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 01:29:29.493 05:24:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 01:29:29.493 05:24:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:29:29.493 05:24:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:29:29.493 05:24:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:29.493 05:24:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 01:29:29.493 05:24:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:29:29.752 05:24:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:29.752 05:24:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:29:29.752 "name": "raid_bdev1", 01:29:29.752 "uuid": "232810cc-d837-47e8-b583-b83988496e58", 01:29:29.752 "strip_size_kb": 0, 01:29:29.752 "state": "online", 01:29:29.752 "raid_level": "raid1", 01:29:29.752 "superblock": true, 01:29:29.752 "num_base_bdevs": 4, 01:29:29.752 "num_base_bdevs_discovered": 2, 01:29:29.752 "num_base_bdevs_operational": 2, 01:29:29.752 "base_bdevs_list": [ 01:29:29.752 { 01:29:29.752 "name": null, 01:29:29.752 "uuid": "00000000-0000-0000-0000-000000000000", 01:29:29.752 "is_configured": false, 01:29:29.752 "data_offset": 0, 01:29:29.752 "data_size": 63488 01:29:29.752 }, 01:29:29.752 { 01:29:29.752 "name": null, 01:29:29.752 "uuid": 
"00000000-0000-0000-0000-000000000000", 01:29:29.752 "is_configured": false, 01:29:29.752 "data_offset": 2048, 01:29:29.752 "data_size": 63488 01:29:29.752 }, 01:29:29.752 { 01:29:29.752 "name": "BaseBdev3", 01:29:29.752 "uuid": "1bfcbd86-832f-5504-9b27-3f92cc392433", 01:29:29.752 "is_configured": true, 01:29:29.752 "data_offset": 2048, 01:29:29.752 "data_size": 63488 01:29:29.752 }, 01:29:29.752 { 01:29:29.752 "name": "BaseBdev4", 01:29:29.752 "uuid": "4f9072ae-88f1-5a77-8000-8da3e2f23d88", 01:29:29.752 "is_configured": true, 01:29:29.752 "data_offset": 2048, 01:29:29.752 "data_size": 63488 01:29:29.752 } 01:29:29.752 ] 01:29:29.752 }' 01:29:29.752 05:24:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:29:29.752 05:24:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 01:29:29.752 05:24:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:29:29.753 05:24:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 01:29:29.753 05:24:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 79419 01:29:29.753 05:24:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 79419 ']' 01:29:29.753 05:24:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 79419 01:29:29.753 05:24:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 01:29:29.753 05:24:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:29:29.753 05:24:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79419 01:29:29.753 killing process with pid 79419 01:29:29.753 Received shutdown signal, test time was about 19.289408 seconds 01:29:29.753 01:29:29.753 Latency(us) 01:29:29.753 [2024-12-09T05:24:21.370Z] Device Information : runtime(s) 
IOPS MiB/s Fail/s TO/s Average min max 01:29:29.753 [2024-12-09T05:24:21.370Z] =================================================================================================================== 01:29:29.753 [2024-12-09T05:24:21.370Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:29:29.753 05:24:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:29:29.753 05:24:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:29:29.753 05:24:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79419' 01:29:29.753 05:24:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 79419 01:29:29.753 05:24:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 79419 01:29:29.753 [2024-12-09 05:24:21.283743] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 01:29:29.753 [2024-12-09 05:24:21.283947] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 01:29:29.753 [2024-12-09 05:24:21.284102] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 01:29:29.753 [2024-12-09 05:24:21.284129] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 01:29:30.329 [2024-12-09 05:24:21.643641] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 01:29:31.704 05:24:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 01:29:31.704 01:29:31.704 real 0m23.046s 01:29:31.704 user 0m31.370s 01:29:31.704 sys 0m2.311s 01:29:31.704 05:24:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 01:29:31.704 05:24:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 01:29:31.704 ************************************ 01:29:31.704 END TEST raid_rebuild_test_sb_io 01:29:31.704 
************************************ 01:29:31.704 05:24:22 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 01:29:31.704 05:24:22 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 01:29:31.704 05:24:22 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 01:29:31.704 05:24:22 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 01:29:31.704 05:24:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 01:29:31.704 ************************************ 01:29:31.704 START TEST raid5f_state_function_test 01:29:31.704 ************************************ 01:29:31.704 05:24:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 false 01:29:31.704 05:24:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 01:29:31.704 05:24:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 01:29:31.704 05:24:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 01:29:31.704 05:24:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 01:29:31.704 05:24:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 01:29:31.704 05:24:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 01:29:31.704 05:24:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 01:29:31.704 05:24:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 01:29:31.704 05:24:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 01:29:31.704 05:24:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 01:29:31.704 05:24:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 01:29:31.704 05:24:22 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 01:29:31.704 05:24:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 01:29:31.704 05:24:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 01:29:31.704 05:24:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 01:29:31.704 05:24:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 01:29:31.704 05:24:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 01:29:31.704 05:24:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 01:29:31.704 05:24:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 01:29:31.704 05:24:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 01:29:31.704 05:24:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 01:29:31.704 05:24:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 01:29:31.704 05:24:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 01:29:31.704 05:24:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 01:29:31.704 05:24:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 01:29:31.704 05:24:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 01:29:31.704 05:24:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=80160 01:29:31.704 05:24:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80160' 01:29:31.704 05:24:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 01:29:31.704 Process raid pid: 80160 01:29:31.704 05:24:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 80160 01:29:31.704 05:24:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 80160 ']' 01:29:31.704 05:24:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:29:31.704 05:24:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 01:29:31.704 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:29:31.704 05:24:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:29:31.704 05:24:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 01:29:31.704 05:24:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:29:31.704 [2024-12-09 05:24:23.032905] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
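[Editor's sketch, not part of the log: the trace above starts `bdev_svc` and then calls `waitforlisten 80160`, which polls until the daemon is up and listening on `/var/tmp/spdk.sock` (with `max_retries=100`). The snippet below is a minimal illustration of that polling pattern; the socket path, retry count, and the background stub standing in for the daemon are all assumptions, not the real helper from autotest_common.sh.]

```shell
# Sketch of a waitforlisten-style helper: poll for a daemon's listen socket
# with a bounded number of retries. Everything here is illustrative; the real
# helper additionally issues an RPC against /var/tmp/spdk.sock to confirm
# the server responds, not just that the path exists.
sock="${TMPDIR:-/tmp}/sketch_spdk_$$.sock"
( sleep 0.2; : > "$sock" ) &   # stand-in for bdev_svc eventually creating its socket
max_retries=100
found=0
for _ in $(seq "$max_retries"); do
    if [ -e "$sock" ]; then
        found=1
        break
    fi
    sleep 0.05                  # back off briefly between polls
done
wait                            # reap the background stub
rm -f "$sock"
echo "found=$found"
```

The bounded loop is what turns "wait for the server" into a test that can fail fast: if the socket never appears within `max_retries` polls, the helper gives up instead of hanging the CI job.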
01:29:31.705 [2024-12-09 05:24:23.033042] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:29:31.705 [2024-12-09 05:24:23.212657] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:29:31.982 [2024-12-09 05:24:23.355106] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:29:31.982 [2024-12-09 05:24:23.567152] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 01:29:31.982 [2024-12-09 05:24:23.567209] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 01:29:32.559 05:24:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:29:32.559 05:24:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 01:29:32.559 05:24:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 01:29:32.559 05:24:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:32.559 05:24:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:29:32.559 [2024-12-09 05:24:24.158411] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 01:29:32.559 [2024-12-09 05:24:24.158509] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 01:29:32.559 [2024-12-09 05:24:24.158539] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 01:29:32.559 [2024-12-09 05:24:24.158567] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 01:29:32.559 [2024-12-09 05:24:24.158597] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 01:29:32.559 [2024-12-09 05:24:24.158627] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 01:29:32.559 05:24:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:32.559 05:24:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 01:29:32.559 05:24:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:29:32.559 05:24:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:29:32.559 05:24:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 01:29:32.559 05:24:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:29:32.559 05:24:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 01:29:32.559 05:24:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:29:32.559 05:24:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:29:32.559 05:24:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:29:32.559 05:24:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:29:32.559 05:24:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:29:32.559 05:24:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:29:32.559 05:24:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:32.559 05:24:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:29:32.817 05:24:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 01:29:32.817 05:24:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:29:32.817 "name": "Existed_Raid", 01:29:32.817 "uuid": "00000000-0000-0000-0000-000000000000", 01:29:32.817 "strip_size_kb": 64, 01:29:32.817 "state": "configuring", 01:29:32.817 "raid_level": "raid5f", 01:29:32.817 "superblock": false, 01:29:32.817 "num_base_bdevs": 3, 01:29:32.817 "num_base_bdevs_discovered": 0, 01:29:32.817 "num_base_bdevs_operational": 3, 01:29:32.817 "base_bdevs_list": [ 01:29:32.817 { 01:29:32.817 "name": "BaseBdev1", 01:29:32.817 "uuid": "00000000-0000-0000-0000-000000000000", 01:29:32.817 "is_configured": false, 01:29:32.817 "data_offset": 0, 01:29:32.817 "data_size": 0 01:29:32.817 }, 01:29:32.817 { 01:29:32.817 "name": "BaseBdev2", 01:29:32.817 "uuid": "00000000-0000-0000-0000-000000000000", 01:29:32.817 "is_configured": false, 01:29:32.817 "data_offset": 0, 01:29:32.817 "data_size": 0 01:29:32.817 }, 01:29:32.817 { 01:29:32.817 "name": "BaseBdev3", 01:29:32.817 "uuid": "00000000-0000-0000-0000-000000000000", 01:29:32.817 "is_configured": false, 01:29:32.817 "data_offset": 0, 01:29:32.817 "data_size": 0 01:29:32.817 } 01:29:32.817 ] 01:29:32.817 }' 01:29:32.817 05:24:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:29:32.817 05:24:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:29:33.383 05:24:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 01:29:33.383 05:24:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:33.383 05:24:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:29:33.383 [2024-12-09 05:24:24.706546] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 01:29:33.383 [2024-12-09 05:24:24.706593] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007780 name Existed_Raid, state configuring 01:29:33.383 05:24:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:33.383 05:24:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 01:29:33.383 05:24:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:33.383 05:24:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:29:33.383 [2024-12-09 05:24:24.714516] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 01:29:33.383 [2024-12-09 05:24:24.714575] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 01:29:33.383 [2024-12-09 05:24:24.714591] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 01:29:33.383 [2024-12-09 05:24:24.714616] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 01:29:33.383 [2024-12-09 05:24:24.714626] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 01:29:33.383 [2024-12-09 05:24:24.714640] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 01:29:33.383 05:24:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:33.383 05:24:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 01:29:33.383 05:24:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:33.383 05:24:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:29:33.383 [2024-12-09 05:24:24.761563] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 01:29:33.383 BaseBdev1 01:29:33.383 05:24:24 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:33.383 05:24:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 01:29:33.383 05:24:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 01:29:33.383 05:24:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 01:29:33.383 05:24:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 01:29:33.383 05:24:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 01:29:33.383 05:24:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 01:29:33.383 05:24:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 01:29:33.383 05:24:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:33.383 05:24:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:29:33.383 05:24:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:33.383 05:24:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 01:29:33.383 05:24:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:33.383 05:24:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:29:33.383 [ 01:29:33.383 { 01:29:33.383 "name": "BaseBdev1", 01:29:33.383 "aliases": [ 01:29:33.383 "258c7f0b-c95f-4d66-931e-6aa05a7f2c85" 01:29:33.383 ], 01:29:33.383 "product_name": "Malloc disk", 01:29:33.383 "block_size": 512, 01:29:33.383 "num_blocks": 65536, 01:29:33.383 "uuid": "258c7f0b-c95f-4d66-931e-6aa05a7f2c85", 01:29:33.383 "assigned_rate_limits": { 01:29:33.383 "rw_ios_per_sec": 0, 01:29:33.383 
"rw_mbytes_per_sec": 0, 01:29:33.383 "r_mbytes_per_sec": 0, 01:29:33.383 "w_mbytes_per_sec": 0 01:29:33.383 }, 01:29:33.383 "claimed": true, 01:29:33.383 "claim_type": "exclusive_write", 01:29:33.383 "zoned": false, 01:29:33.383 "supported_io_types": { 01:29:33.383 "read": true, 01:29:33.383 "write": true, 01:29:33.383 "unmap": true, 01:29:33.383 "flush": true, 01:29:33.383 "reset": true, 01:29:33.383 "nvme_admin": false, 01:29:33.383 "nvme_io": false, 01:29:33.383 "nvme_io_md": false, 01:29:33.383 "write_zeroes": true, 01:29:33.383 "zcopy": true, 01:29:33.383 "get_zone_info": false, 01:29:33.383 "zone_management": false, 01:29:33.383 "zone_append": false, 01:29:33.383 "compare": false, 01:29:33.383 "compare_and_write": false, 01:29:33.383 "abort": true, 01:29:33.383 "seek_hole": false, 01:29:33.383 "seek_data": false, 01:29:33.383 "copy": true, 01:29:33.383 "nvme_iov_md": false 01:29:33.383 }, 01:29:33.383 "memory_domains": [ 01:29:33.383 { 01:29:33.383 "dma_device_id": "system", 01:29:33.383 "dma_device_type": 1 01:29:33.383 }, 01:29:33.383 { 01:29:33.383 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:29:33.383 "dma_device_type": 2 01:29:33.383 } 01:29:33.383 ], 01:29:33.383 "driver_specific": {} 01:29:33.383 } 01:29:33.383 ] 01:29:33.383 05:24:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:33.383 05:24:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 01:29:33.383 05:24:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 01:29:33.383 05:24:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:29:33.383 05:24:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:29:33.383 05:24:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 01:29:33.383 05:24:24 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:29:33.383 05:24:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 01:29:33.383 05:24:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:29:33.383 05:24:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:29:33.383 05:24:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:29:33.383 05:24:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:29:33.383 05:24:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:29:33.383 05:24:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:29:33.383 05:24:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:33.383 05:24:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:29:33.383 05:24:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:33.383 05:24:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:29:33.383 "name": "Existed_Raid", 01:29:33.383 "uuid": "00000000-0000-0000-0000-000000000000", 01:29:33.383 "strip_size_kb": 64, 01:29:33.383 "state": "configuring", 01:29:33.383 "raid_level": "raid5f", 01:29:33.383 "superblock": false, 01:29:33.383 "num_base_bdevs": 3, 01:29:33.383 "num_base_bdevs_discovered": 1, 01:29:33.383 "num_base_bdevs_operational": 3, 01:29:33.383 "base_bdevs_list": [ 01:29:33.383 { 01:29:33.383 "name": "BaseBdev1", 01:29:33.383 "uuid": "258c7f0b-c95f-4d66-931e-6aa05a7f2c85", 01:29:33.384 "is_configured": true, 01:29:33.384 "data_offset": 0, 01:29:33.384 "data_size": 65536 01:29:33.384 }, 01:29:33.384 { 01:29:33.384 "name": 
"BaseBdev2", 01:29:33.384 "uuid": "00000000-0000-0000-0000-000000000000", 01:29:33.384 "is_configured": false, 01:29:33.384 "data_offset": 0, 01:29:33.384 "data_size": 0 01:29:33.384 }, 01:29:33.384 { 01:29:33.384 "name": "BaseBdev3", 01:29:33.384 "uuid": "00000000-0000-0000-0000-000000000000", 01:29:33.384 "is_configured": false, 01:29:33.384 "data_offset": 0, 01:29:33.384 "data_size": 0 01:29:33.384 } 01:29:33.384 ] 01:29:33.384 }' 01:29:33.384 05:24:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:29:33.384 05:24:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:29:33.951 05:24:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 01:29:33.951 05:24:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:33.951 05:24:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:29:33.951 [2024-12-09 05:24:25.321771] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 01:29:33.951 [2024-12-09 05:24:25.321836] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 01:29:33.951 05:24:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:33.951 05:24:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 01:29:33.951 05:24:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:33.951 05:24:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:29:33.951 [2024-12-09 05:24:25.333816] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 01:29:33.951 [2024-12-09 05:24:25.336674] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev2 01:29:33.951 [2024-12-09 05:24:25.336950] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 01:29:33.951 [2024-12-09 05:24:25.337083] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 01:29:33.951 [2024-12-09 05:24:25.337222] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 01:29:33.951 05:24:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:33.951 05:24:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 01:29:33.951 05:24:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 01:29:33.951 05:24:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 01:29:33.951 05:24:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:29:33.951 05:24:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:29:33.951 05:24:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 01:29:33.951 05:24:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:29:33.951 05:24:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 01:29:33.951 05:24:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:29:33.951 05:24:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:29:33.951 05:24:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:29:33.951 05:24:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:29:33.951 05:24:25 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:29:33.951 05:24:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:33.951 05:24:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:29:33.951 05:24:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:29:33.951 05:24:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:33.951 05:24:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:29:33.951 "name": "Existed_Raid", 01:29:33.951 "uuid": "00000000-0000-0000-0000-000000000000", 01:29:33.951 "strip_size_kb": 64, 01:29:33.951 "state": "configuring", 01:29:33.951 "raid_level": "raid5f", 01:29:33.951 "superblock": false, 01:29:33.951 "num_base_bdevs": 3, 01:29:33.951 "num_base_bdevs_discovered": 1, 01:29:33.951 "num_base_bdevs_operational": 3, 01:29:33.951 "base_bdevs_list": [ 01:29:33.951 { 01:29:33.951 "name": "BaseBdev1", 01:29:33.951 "uuid": "258c7f0b-c95f-4d66-931e-6aa05a7f2c85", 01:29:33.951 "is_configured": true, 01:29:33.951 "data_offset": 0, 01:29:33.951 "data_size": 65536 01:29:33.951 }, 01:29:33.951 { 01:29:33.951 "name": "BaseBdev2", 01:29:33.951 "uuid": "00000000-0000-0000-0000-000000000000", 01:29:33.951 "is_configured": false, 01:29:33.951 "data_offset": 0, 01:29:33.951 "data_size": 0 01:29:33.951 }, 01:29:33.951 { 01:29:33.951 "name": "BaseBdev3", 01:29:33.951 "uuid": "00000000-0000-0000-0000-000000000000", 01:29:33.951 "is_configured": false, 01:29:33.951 "data_offset": 0, 01:29:33.951 "data_size": 0 01:29:33.951 } 01:29:33.951 ] 01:29:33.951 }' 01:29:33.951 05:24:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:29:33.951 05:24:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:29:34.518 05:24:25 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 01:29:34.518 05:24:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:34.518 05:24:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:29:34.518 [2024-12-09 05:24:25.887296] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 01:29:34.518 BaseBdev2 01:29:34.518 05:24:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:34.518 05:24:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 01:29:34.518 05:24:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 01:29:34.518 05:24:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 01:29:34.518 05:24:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 01:29:34.518 05:24:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 01:29:34.518 05:24:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 01:29:34.518 05:24:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 01:29:34.518 05:24:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:34.518 05:24:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:29:34.518 05:24:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:34.518 05:24:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 01:29:34.518 05:24:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:34.518 05:24:25 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 01:29:34.518 [ 01:29:34.518 { 01:29:34.518 "name": "BaseBdev2", 01:29:34.518 "aliases": [ 01:29:34.518 "ae8a6fc6-10f1-4f9e-8d61-434f86aa7854" 01:29:34.518 ], 01:29:34.518 "product_name": "Malloc disk", 01:29:34.518 "block_size": 512, 01:29:34.518 "num_blocks": 65536, 01:29:34.518 "uuid": "ae8a6fc6-10f1-4f9e-8d61-434f86aa7854", 01:29:34.518 "assigned_rate_limits": { 01:29:34.518 "rw_ios_per_sec": 0, 01:29:34.518 "rw_mbytes_per_sec": 0, 01:29:34.518 "r_mbytes_per_sec": 0, 01:29:34.518 "w_mbytes_per_sec": 0 01:29:34.518 }, 01:29:34.518 "claimed": true, 01:29:34.518 "claim_type": "exclusive_write", 01:29:34.518 "zoned": false, 01:29:34.518 "supported_io_types": { 01:29:34.518 "read": true, 01:29:34.518 "write": true, 01:29:34.518 "unmap": true, 01:29:34.518 "flush": true, 01:29:34.518 "reset": true, 01:29:34.518 "nvme_admin": false, 01:29:34.518 "nvme_io": false, 01:29:34.518 "nvme_io_md": false, 01:29:34.518 "write_zeroes": true, 01:29:34.518 "zcopy": true, 01:29:34.518 "get_zone_info": false, 01:29:34.518 "zone_management": false, 01:29:34.518 "zone_append": false, 01:29:34.518 "compare": false, 01:29:34.518 "compare_and_write": false, 01:29:34.518 "abort": true, 01:29:34.518 "seek_hole": false, 01:29:34.518 "seek_data": false, 01:29:34.518 "copy": true, 01:29:34.518 "nvme_iov_md": false 01:29:34.518 }, 01:29:34.518 "memory_domains": [ 01:29:34.518 { 01:29:34.518 "dma_device_id": "system", 01:29:34.518 "dma_device_type": 1 01:29:34.518 }, 01:29:34.518 { 01:29:34.518 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:29:34.518 "dma_device_type": 2 01:29:34.518 } 01:29:34.518 ], 01:29:34.518 "driver_specific": {} 01:29:34.518 } 01:29:34.518 ] 01:29:34.518 05:24:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:34.518 05:24:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 01:29:34.518 05:24:25 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 01:29:34.518 05:24:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 01:29:34.518 05:24:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 01:29:34.518 05:24:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:29:34.518 05:24:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:29:34.518 05:24:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 01:29:34.518 05:24:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:29:34.518 05:24:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 01:29:34.518 05:24:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:29:34.518 05:24:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:29:34.518 05:24:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:29:34.518 05:24:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:29:34.518 05:24:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:29:34.518 05:24:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:29:34.518 05:24:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:34.518 05:24:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:29:34.518 05:24:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:34.518 05:24:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- 
# raid_bdev_info='{ 01:29:34.518 "name": "Existed_Raid", 01:29:34.518 "uuid": "00000000-0000-0000-0000-000000000000", 01:29:34.518 "strip_size_kb": 64, 01:29:34.518 "state": "configuring", 01:29:34.518 "raid_level": "raid5f", 01:29:34.518 "superblock": false, 01:29:34.518 "num_base_bdevs": 3, 01:29:34.518 "num_base_bdevs_discovered": 2, 01:29:34.518 "num_base_bdevs_operational": 3, 01:29:34.518 "base_bdevs_list": [ 01:29:34.518 { 01:29:34.518 "name": "BaseBdev1", 01:29:34.518 "uuid": "258c7f0b-c95f-4d66-931e-6aa05a7f2c85", 01:29:34.518 "is_configured": true, 01:29:34.518 "data_offset": 0, 01:29:34.518 "data_size": 65536 01:29:34.518 }, 01:29:34.518 { 01:29:34.518 "name": "BaseBdev2", 01:29:34.518 "uuid": "ae8a6fc6-10f1-4f9e-8d61-434f86aa7854", 01:29:34.518 "is_configured": true, 01:29:34.518 "data_offset": 0, 01:29:34.518 "data_size": 65536 01:29:34.518 }, 01:29:34.518 { 01:29:34.518 "name": "BaseBdev3", 01:29:34.518 "uuid": "00000000-0000-0000-0000-000000000000", 01:29:34.518 "is_configured": false, 01:29:34.518 "data_offset": 0, 01:29:34.518 "data_size": 0 01:29:34.518 } 01:29:34.518 ] 01:29:34.518 }' 01:29:34.518 05:24:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:29:34.518 05:24:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:29:35.086 05:24:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 01:29:35.086 05:24:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:35.086 05:24:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:29:35.086 [2024-12-09 05:24:26.510329] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 01:29:35.086 [2024-12-09 05:24:26.510494] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 01:29:35.086 [2024-12-09 05:24:26.510537] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 01:29:35.086 [2024-12-09 05:24:26.510992] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 01:29:35.086 [2024-12-09 05:24:26.516563] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 01:29:35.086 [2024-12-09 05:24:26.516593] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 01:29:35.086 [2024-12-09 05:24:26.516960] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:29:35.086 BaseBdev3 01:29:35.086 05:24:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:35.086 05:24:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 01:29:35.086 05:24:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 01:29:35.086 05:24:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 01:29:35.086 05:24:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 01:29:35.086 05:24:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 01:29:35.086 05:24:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 01:29:35.086 05:24:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 01:29:35.087 05:24:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:35.087 05:24:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:29:35.087 05:24:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:35.087 05:24:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev3 -t 2000 01:29:35.087 05:24:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:35.087 05:24:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:29:35.087 [ 01:29:35.087 { 01:29:35.087 "name": "BaseBdev3", 01:29:35.087 "aliases": [ 01:29:35.087 "533e5fc1-8388-4e0d-8e67-718594ab5032" 01:29:35.087 ], 01:29:35.087 "product_name": "Malloc disk", 01:29:35.087 "block_size": 512, 01:29:35.087 "num_blocks": 65536, 01:29:35.087 "uuid": "533e5fc1-8388-4e0d-8e67-718594ab5032", 01:29:35.087 "assigned_rate_limits": { 01:29:35.087 "rw_ios_per_sec": 0, 01:29:35.087 "rw_mbytes_per_sec": 0, 01:29:35.087 "r_mbytes_per_sec": 0, 01:29:35.087 "w_mbytes_per_sec": 0 01:29:35.087 }, 01:29:35.087 "claimed": true, 01:29:35.087 "claim_type": "exclusive_write", 01:29:35.087 "zoned": false, 01:29:35.087 "supported_io_types": { 01:29:35.087 "read": true, 01:29:35.087 "write": true, 01:29:35.087 "unmap": true, 01:29:35.087 "flush": true, 01:29:35.087 "reset": true, 01:29:35.087 "nvme_admin": false, 01:29:35.087 "nvme_io": false, 01:29:35.087 "nvme_io_md": false, 01:29:35.087 "write_zeroes": true, 01:29:35.087 "zcopy": true, 01:29:35.087 "get_zone_info": false, 01:29:35.087 "zone_management": false, 01:29:35.087 "zone_append": false, 01:29:35.087 "compare": false, 01:29:35.087 "compare_and_write": false, 01:29:35.087 "abort": true, 01:29:35.087 "seek_hole": false, 01:29:35.087 "seek_data": false, 01:29:35.087 "copy": true, 01:29:35.087 "nvme_iov_md": false 01:29:35.087 }, 01:29:35.087 "memory_domains": [ 01:29:35.087 { 01:29:35.087 "dma_device_id": "system", 01:29:35.087 "dma_device_type": 1 01:29:35.087 }, 01:29:35.087 { 01:29:35.087 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:29:35.087 "dma_device_type": 2 01:29:35.087 } 01:29:35.087 ], 01:29:35.087 "driver_specific": {} 01:29:35.087 } 01:29:35.087 ] 01:29:35.087 05:24:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 01:29:35.087 05:24:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 01:29:35.087 05:24:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 01:29:35.087 05:24:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 01:29:35.087 05:24:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 01:29:35.087 05:24:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:29:35.087 05:24:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:29:35.087 05:24:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 01:29:35.087 05:24:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:29:35.087 05:24:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 01:29:35.087 05:24:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:29:35.087 05:24:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:29:35.087 05:24:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:29:35.087 05:24:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:29:35.087 05:24:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:29:35.087 05:24:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:35.087 05:24:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:29:35.087 05:24:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:29:35.087 05:24:26 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:35.087 05:24:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:29:35.087 "name": "Existed_Raid", 01:29:35.087 "uuid": "08733a60-1edf-48af-9d18-977ba9f34aaa", 01:29:35.087 "strip_size_kb": 64, 01:29:35.087 "state": "online", 01:29:35.087 "raid_level": "raid5f", 01:29:35.087 "superblock": false, 01:29:35.087 "num_base_bdevs": 3, 01:29:35.087 "num_base_bdevs_discovered": 3, 01:29:35.087 "num_base_bdevs_operational": 3, 01:29:35.087 "base_bdevs_list": [ 01:29:35.087 { 01:29:35.087 "name": "BaseBdev1", 01:29:35.087 "uuid": "258c7f0b-c95f-4d66-931e-6aa05a7f2c85", 01:29:35.087 "is_configured": true, 01:29:35.087 "data_offset": 0, 01:29:35.087 "data_size": 65536 01:29:35.087 }, 01:29:35.087 { 01:29:35.087 "name": "BaseBdev2", 01:29:35.087 "uuid": "ae8a6fc6-10f1-4f9e-8d61-434f86aa7854", 01:29:35.087 "is_configured": true, 01:29:35.087 "data_offset": 0, 01:29:35.087 "data_size": 65536 01:29:35.087 }, 01:29:35.087 { 01:29:35.087 "name": "BaseBdev3", 01:29:35.087 "uuid": "533e5fc1-8388-4e0d-8e67-718594ab5032", 01:29:35.087 "is_configured": true, 01:29:35.087 "data_offset": 0, 01:29:35.087 "data_size": 65536 01:29:35.087 } 01:29:35.087 ] 01:29:35.087 }' 01:29:35.087 05:24:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:29:35.087 05:24:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:29:35.656 05:24:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 01:29:35.656 05:24:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 01:29:35.656 05:24:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 01:29:35.656 05:24:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 01:29:35.656 05:24:27 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 01:29:35.656 05:24:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 01:29:35.656 05:24:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 01:29:35.656 05:24:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:35.656 05:24:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:29:35.656 05:24:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 01:29:35.656 [2024-12-09 05:24:27.087203] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 01:29:35.656 05:24:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:35.656 05:24:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 01:29:35.656 "name": "Existed_Raid", 01:29:35.656 "aliases": [ 01:29:35.656 "08733a60-1edf-48af-9d18-977ba9f34aaa" 01:29:35.656 ], 01:29:35.656 "product_name": "Raid Volume", 01:29:35.656 "block_size": 512, 01:29:35.656 "num_blocks": 131072, 01:29:35.656 "uuid": "08733a60-1edf-48af-9d18-977ba9f34aaa", 01:29:35.656 "assigned_rate_limits": { 01:29:35.656 "rw_ios_per_sec": 0, 01:29:35.656 "rw_mbytes_per_sec": 0, 01:29:35.656 "r_mbytes_per_sec": 0, 01:29:35.656 "w_mbytes_per_sec": 0 01:29:35.656 }, 01:29:35.656 "claimed": false, 01:29:35.656 "zoned": false, 01:29:35.656 "supported_io_types": { 01:29:35.656 "read": true, 01:29:35.656 "write": true, 01:29:35.656 "unmap": false, 01:29:35.656 "flush": false, 01:29:35.656 "reset": true, 01:29:35.656 "nvme_admin": false, 01:29:35.656 "nvme_io": false, 01:29:35.656 "nvme_io_md": false, 01:29:35.656 "write_zeroes": true, 01:29:35.656 "zcopy": false, 01:29:35.656 "get_zone_info": false, 01:29:35.656 "zone_management": false, 01:29:35.656 "zone_append": false, 
01:29:35.656 "compare": false, 01:29:35.656 "compare_and_write": false, 01:29:35.656 "abort": false, 01:29:35.656 "seek_hole": false, 01:29:35.656 "seek_data": false, 01:29:35.656 "copy": false, 01:29:35.656 "nvme_iov_md": false 01:29:35.656 }, 01:29:35.656 "driver_specific": { 01:29:35.656 "raid": { 01:29:35.656 "uuid": "08733a60-1edf-48af-9d18-977ba9f34aaa", 01:29:35.656 "strip_size_kb": 64, 01:29:35.656 "state": "online", 01:29:35.656 "raid_level": "raid5f", 01:29:35.656 "superblock": false, 01:29:35.656 "num_base_bdevs": 3, 01:29:35.656 "num_base_bdevs_discovered": 3, 01:29:35.656 "num_base_bdevs_operational": 3, 01:29:35.656 "base_bdevs_list": [ 01:29:35.656 { 01:29:35.656 "name": "BaseBdev1", 01:29:35.656 "uuid": "258c7f0b-c95f-4d66-931e-6aa05a7f2c85", 01:29:35.656 "is_configured": true, 01:29:35.656 "data_offset": 0, 01:29:35.656 "data_size": 65536 01:29:35.656 }, 01:29:35.656 { 01:29:35.656 "name": "BaseBdev2", 01:29:35.656 "uuid": "ae8a6fc6-10f1-4f9e-8d61-434f86aa7854", 01:29:35.656 "is_configured": true, 01:29:35.656 "data_offset": 0, 01:29:35.656 "data_size": 65536 01:29:35.656 }, 01:29:35.656 { 01:29:35.656 "name": "BaseBdev3", 01:29:35.656 "uuid": "533e5fc1-8388-4e0d-8e67-718594ab5032", 01:29:35.656 "is_configured": true, 01:29:35.656 "data_offset": 0, 01:29:35.656 "data_size": 65536 01:29:35.656 } 01:29:35.656 ] 01:29:35.656 } 01:29:35.656 } 01:29:35.656 }' 01:29:35.656 05:24:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 01:29:35.656 05:24:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 01:29:35.656 BaseBdev2 01:29:35.656 BaseBdev3' 01:29:35.656 05:24:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:29:35.656 05:24:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 
' 01:29:35.656 05:24:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:29:35.656 05:24:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 01:29:35.656 05:24:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:29:35.656 05:24:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:35.656 05:24:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:29:35.915 05:24:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:35.915 05:24:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:29:35.915 05:24:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:29:35.915 05:24:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:29:35.915 05:24:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 01:29:35.915 05:24:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:35.915 05:24:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:29:35.915 05:24:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:29:35.915 05:24:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:35.915 05:24:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:29:35.915 05:24:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:29:35.915 05:24:27 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:29:35.915 05:24:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 01:29:35.915 05:24:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:35.915 05:24:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:29:35.915 05:24:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:29:35.915 05:24:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:35.915 05:24:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:29:35.915 05:24:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:29:35.915 05:24:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 01:29:35.915 05:24:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:35.915 05:24:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:29:35.915 [2024-12-09 05:24:27.443097] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 01:29:36.174 05:24:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:36.174 05:24:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 01:29:36.174 05:24:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 01:29:36.174 05:24:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 01:29:36.174 05:24:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 01:29:36.174 05:24:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 01:29:36.174 
05:24:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 01:29:36.174 05:24:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:29:36.174 05:24:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:29:36.174 05:24:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 01:29:36.174 05:24:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:29:36.174 05:24:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 01:29:36.174 05:24:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:29:36.174 05:24:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:29:36.174 05:24:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:29:36.174 05:24:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:29:36.174 05:24:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:29:36.174 05:24:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:29:36.174 05:24:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:36.174 05:24:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:29:36.174 05:24:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:36.174 05:24:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:29:36.174 "name": "Existed_Raid", 01:29:36.174 "uuid": "08733a60-1edf-48af-9d18-977ba9f34aaa", 01:29:36.174 "strip_size_kb": 64, 01:29:36.174 "state": 
"online", 01:29:36.174 "raid_level": "raid5f", 01:29:36.174 "superblock": false, 01:29:36.174 "num_base_bdevs": 3, 01:29:36.174 "num_base_bdevs_discovered": 2, 01:29:36.174 "num_base_bdevs_operational": 2, 01:29:36.174 "base_bdevs_list": [ 01:29:36.174 { 01:29:36.174 "name": null, 01:29:36.174 "uuid": "00000000-0000-0000-0000-000000000000", 01:29:36.174 "is_configured": false, 01:29:36.174 "data_offset": 0, 01:29:36.174 "data_size": 65536 01:29:36.174 }, 01:29:36.174 { 01:29:36.174 "name": "BaseBdev2", 01:29:36.174 "uuid": "ae8a6fc6-10f1-4f9e-8d61-434f86aa7854", 01:29:36.174 "is_configured": true, 01:29:36.174 "data_offset": 0, 01:29:36.174 "data_size": 65536 01:29:36.174 }, 01:29:36.175 { 01:29:36.175 "name": "BaseBdev3", 01:29:36.175 "uuid": "533e5fc1-8388-4e0d-8e67-718594ab5032", 01:29:36.175 "is_configured": true, 01:29:36.175 "data_offset": 0, 01:29:36.175 "data_size": 65536 01:29:36.175 } 01:29:36.175 ] 01:29:36.175 }' 01:29:36.175 05:24:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:29:36.175 05:24:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:29:36.741 05:24:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 01:29:36.741 05:24:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 01:29:36.741 05:24:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 01:29:36.741 05:24:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 01:29:36.741 05:24:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:36.741 05:24:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:29:36.741 05:24:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:36.741 05:24:28 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 01:29:36.741 05:24:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 01:29:36.741 05:24:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 01:29:36.741 05:24:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:36.741 05:24:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:29:36.741 [2024-12-09 05:24:28.145873] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 01:29:36.741 [2024-12-09 05:24:28.146001] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 01:29:36.741 [2024-12-09 05:24:28.232474] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 01:29:36.741 05:24:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:36.741 05:24:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 01:29:36.741 05:24:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 01:29:36.741 05:24:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 01:29:36.741 05:24:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:36.741 05:24:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 01:29:36.741 05:24:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:29:36.741 05:24:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:36.741 05:24:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 01:29:36.741 05:24:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
01:29:36.742 05:24:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 01:29:36.742 05:24:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:36.742 05:24:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:29:36.742 [2024-12-09 05:24:28.296562] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 01:29:36.742 [2024-12-09 05:24:28.296788] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 01:29:37.001 05:24:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:37.001 05:24:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 01:29:37.001 05:24:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 01:29:37.001 05:24:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 01:29:37.001 05:24:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:37.001 05:24:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 01:29:37.001 05:24:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:29:37.001 05:24:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:37.001 05:24:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 01:29:37.001 05:24:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 01:29:37.001 05:24:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 01:29:37.001 05:24:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 01:29:37.001 05:24:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < 
num_base_bdevs )) 01:29:37.001 05:24:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 01:29:37.001 05:24:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:37.001 05:24:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:29:37.001 BaseBdev2 01:29:37.001 05:24:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:37.001 05:24:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 01:29:37.001 05:24:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 01:29:37.001 05:24:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 01:29:37.001 05:24:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 01:29:37.001 05:24:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 01:29:37.001 05:24:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 01:29:37.001 05:24:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 01:29:37.001 05:24:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:37.001 05:24:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:29:37.001 05:24:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:37.001 05:24:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 01:29:37.001 05:24:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:37.001 05:24:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
01:29:37.001 [ 01:29:37.001 { 01:29:37.001 "name": "BaseBdev2", 01:29:37.001 "aliases": [ 01:29:37.001 "94f53639-bce4-4b41-8e1b-65e36e8f4e2f" 01:29:37.001 ], 01:29:37.001 "product_name": "Malloc disk", 01:29:37.001 "block_size": 512, 01:29:37.001 "num_blocks": 65536, 01:29:37.001 "uuid": "94f53639-bce4-4b41-8e1b-65e36e8f4e2f", 01:29:37.001 "assigned_rate_limits": { 01:29:37.001 "rw_ios_per_sec": 0, 01:29:37.001 "rw_mbytes_per_sec": 0, 01:29:37.001 "r_mbytes_per_sec": 0, 01:29:37.001 "w_mbytes_per_sec": 0 01:29:37.001 }, 01:29:37.001 "claimed": false, 01:29:37.001 "zoned": false, 01:29:37.001 "supported_io_types": { 01:29:37.001 "read": true, 01:29:37.001 "write": true, 01:29:37.001 "unmap": true, 01:29:37.001 "flush": true, 01:29:37.001 "reset": true, 01:29:37.001 "nvme_admin": false, 01:29:37.001 "nvme_io": false, 01:29:37.001 "nvme_io_md": false, 01:29:37.001 "write_zeroes": true, 01:29:37.001 "zcopy": true, 01:29:37.001 "get_zone_info": false, 01:29:37.001 "zone_management": false, 01:29:37.001 "zone_append": false, 01:29:37.001 "compare": false, 01:29:37.001 "compare_and_write": false, 01:29:37.001 "abort": true, 01:29:37.001 "seek_hole": false, 01:29:37.001 "seek_data": false, 01:29:37.001 "copy": true, 01:29:37.001 "nvme_iov_md": false 01:29:37.001 }, 01:29:37.001 "memory_domains": [ 01:29:37.001 { 01:29:37.001 "dma_device_id": "system", 01:29:37.001 "dma_device_type": 1 01:29:37.001 }, 01:29:37.001 { 01:29:37.001 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:29:37.001 "dma_device_type": 2 01:29:37.001 } 01:29:37.001 ], 01:29:37.001 "driver_specific": {} 01:29:37.001 } 01:29:37.001 ] 01:29:37.001 05:24:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:37.001 05:24:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 01:29:37.001 05:24:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 01:29:37.001 05:24:28 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 01:29:37.001 05:24:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 01:29:37.001 05:24:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:37.001 05:24:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:29:37.001 BaseBdev3 01:29:37.001 05:24:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:37.001 05:24:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 01:29:37.001 05:24:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 01:29:37.001 05:24:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 01:29:37.001 05:24:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 01:29:37.001 05:24:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 01:29:37.001 05:24:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 01:29:37.001 05:24:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 01:29:37.001 05:24:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:37.001 05:24:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:29:37.001 05:24:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:37.001 05:24:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 01:29:37.001 05:24:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:37.001 05:24:28 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 01:29:37.001 [ 01:29:37.001 { 01:29:37.001 "name": "BaseBdev3", 01:29:37.001 "aliases": [ 01:29:37.001 "3690dc61-6a50-4795-9751-3cf04c766871" 01:29:37.001 ], 01:29:37.001 "product_name": "Malloc disk", 01:29:37.001 "block_size": 512, 01:29:37.001 "num_blocks": 65536, 01:29:37.001 "uuid": "3690dc61-6a50-4795-9751-3cf04c766871", 01:29:37.001 "assigned_rate_limits": { 01:29:37.001 "rw_ios_per_sec": 0, 01:29:37.001 "rw_mbytes_per_sec": 0, 01:29:37.001 "r_mbytes_per_sec": 0, 01:29:37.001 "w_mbytes_per_sec": 0 01:29:37.001 }, 01:29:37.001 "claimed": false, 01:29:37.001 "zoned": false, 01:29:37.001 "supported_io_types": { 01:29:37.001 "read": true, 01:29:37.001 "write": true, 01:29:37.002 "unmap": true, 01:29:37.002 "flush": true, 01:29:37.002 "reset": true, 01:29:37.002 "nvme_admin": false, 01:29:37.002 "nvme_io": false, 01:29:37.002 "nvme_io_md": false, 01:29:37.002 "write_zeroes": true, 01:29:37.002 "zcopy": true, 01:29:37.002 "get_zone_info": false, 01:29:37.002 "zone_management": false, 01:29:37.002 "zone_append": false, 01:29:37.002 "compare": false, 01:29:37.002 "compare_and_write": false, 01:29:37.002 "abort": true, 01:29:37.002 "seek_hole": false, 01:29:37.002 "seek_data": false, 01:29:37.002 "copy": true, 01:29:37.002 "nvme_iov_md": false 01:29:37.002 }, 01:29:37.002 "memory_domains": [ 01:29:37.002 { 01:29:37.002 "dma_device_id": "system", 01:29:37.002 "dma_device_type": 1 01:29:37.002 }, 01:29:37.002 { 01:29:37.002 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:29:37.002 "dma_device_type": 2 01:29:37.002 } 01:29:37.002 ], 01:29:37.002 "driver_specific": {} 01:29:37.002 } 01:29:37.002 ] 01:29:37.002 05:24:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:37.002 05:24:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 01:29:37.002 05:24:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 01:29:37.002 05:24:28 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 01:29:37.002 05:24:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 01:29:37.002 05:24:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:37.002 05:24:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:29:37.002 [2024-12-09 05:24:28.612189] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 01:29:37.002 [2024-12-09 05:24:28.612261] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 01:29:37.002 [2024-12-09 05:24:28.612298] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 01:29:37.261 [2024-12-09 05:24:28.615346] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 01:29:37.261 05:24:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:37.261 05:24:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 01:29:37.261 05:24:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:29:37.261 05:24:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:29:37.261 05:24:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 01:29:37.261 05:24:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:29:37.261 05:24:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 01:29:37.261 05:24:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:29:37.261 05:24:28 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:29:37.261 05:24:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:29:37.261 05:24:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:29:37.261 05:24:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:29:37.261 05:24:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:37.261 05:24:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:29:37.261 05:24:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:29:37.261 05:24:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:37.261 05:24:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:29:37.261 "name": "Existed_Raid", 01:29:37.261 "uuid": "00000000-0000-0000-0000-000000000000", 01:29:37.261 "strip_size_kb": 64, 01:29:37.261 "state": "configuring", 01:29:37.261 "raid_level": "raid5f", 01:29:37.261 "superblock": false, 01:29:37.261 "num_base_bdevs": 3, 01:29:37.261 "num_base_bdevs_discovered": 2, 01:29:37.261 "num_base_bdevs_operational": 3, 01:29:37.261 "base_bdevs_list": [ 01:29:37.261 { 01:29:37.261 "name": "BaseBdev1", 01:29:37.261 "uuid": "00000000-0000-0000-0000-000000000000", 01:29:37.261 "is_configured": false, 01:29:37.261 "data_offset": 0, 01:29:37.261 "data_size": 0 01:29:37.261 }, 01:29:37.261 { 01:29:37.261 "name": "BaseBdev2", 01:29:37.261 "uuid": "94f53639-bce4-4b41-8e1b-65e36e8f4e2f", 01:29:37.261 "is_configured": true, 01:29:37.261 "data_offset": 0, 01:29:37.261 "data_size": 65536 01:29:37.261 }, 01:29:37.261 { 01:29:37.261 "name": "BaseBdev3", 01:29:37.261 "uuid": "3690dc61-6a50-4795-9751-3cf04c766871", 01:29:37.261 "is_configured": true, 
01:29:37.261 "data_offset": 0, 01:29:37.261 "data_size": 65536 01:29:37.261 } 01:29:37.261 ] 01:29:37.261 }' 01:29:37.261 05:24:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:29:37.261 05:24:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:29:37.829 05:24:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 01:29:37.829 05:24:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:37.829 05:24:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:29:37.829 [2024-12-09 05:24:29.144436] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 01:29:37.829 05:24:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:37.829 05:24:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 01:29:37.829 05:24:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:29:37.829 05:24:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:29:37.829 05:24:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 01:29:37.829 05:24:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:29:37.829 05:24:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 01:29:37.829 05:24:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:29:37.829 05:24:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:29:37.829 05:24:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:29:37.829 05:24:29 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:29:37.829 05:24:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:29:37.829 05:24:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:37.829 05:24:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:29:37.829 05:24:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:29:37.829 05:24:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:37.829 05:24:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:29:37.829 "name": "Existed_Raid", 01:29:37.829 "uuid": "00000000-0000-0000-0000-000000000000", 01:29:37.829 "strip_size_kb": 64, 01:29:37.829 "state": "configuring", 01:29:37.829 "raid_level": "raid5f", 01:29:37.829 "superblock": false, 01:29:37.829 "num_base_bdevs": 3, 01:29:37.829 "num_base_bdevs_discovered": 1, 01:29:37.829 "num_base_bdevs_operational": 3, 01:29:37.829 "base_bdevs_list": [ 01:29:37.829 { 01:29:37.829 "name": "BaseBdev1", 01:29:37.829 "uuid": "00000000-0000-0000-0000-000000000000", 01:29:37.829 "is_configured": false, 01:29:37.829 "data_offset": 0, 01:29:37.829 "data_size": 0 01:29:37.829 }, 01:29:37.829 { 01:29:37.829 "name": null, 01:29:37.829 "uuid": "94f53639-bce4-4b41-8e1b-65e36e8f4e2f", 01:29:37.829 "is_configured": false, 01:29:37.829 "data_offset": 0, 01:29:37.829 "data_size": 65536 01:29:37.829 }, 01:29:37.829 { 01:29:37.829 "name": "BaseBdev3", 01:29:37.830 "uuid": "3690dc61-6a50-4795-9751-3cf04c766871", 01:29:37.830 "is_configured": true, 01:29:37.830 "data_offset": 0, 01:29:37.830 "data_size": 65536 01:29:37.830 } 01:29:37.830 ] 01:29:37.830 }' 01:29:37.830 05:24:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:29:37.830 05:24:29 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:29:38.088 05:24:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 01:29:38.088 05:24:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:38.088 05:24:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:29:38.088 05:24:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 01:29:38.088 05:24:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:38.347 05:24:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 01:29:38.347 05:24:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 01:29:38.348 05:24:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:38.348 05:24:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:29:38.348 [2024-12-09 05:24:29.776695] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 01:29:38.348 BaseBdev1 01:29:38.348 05:24:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:38.348 05:24:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 01:29:38.348 05:24:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 01:29:38.348 05:24:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 01:29:38.348 05:24:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 01:29:38.348 05:24:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 01:29:38.348 05:24:29 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 01:29:38.348 05:24:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 01:29:38.348 05:24:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:38.348 05:24:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:29:38.348 05:24:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:38.348 05:24:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 01:29:38.348 05:24:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:38.348 05:24:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:29:38.348 [ 01:29:38.348 { 01:29:38.348 "name": "BaseBdev1", 01:29:38.348 "aliases": [ 01:29:38.348 "a789c9fb-2efb-411f-9df5-32b7e09f7476" 01:29:38.348 ], 01:29:38.348 "product_name": "Malloc disk", 01:29:38.348 "block_size": 512, 01:29:38.348 "num_blocks": 65536, 01:29:38.348 "uuid": "a789c9fb-2efb-411f-9df5-32b7e09f7476", 01:29:38.348 "assigned_rate_limits": { 01:29:38.348 "rw_ios_per_sec": 0, 01:29:38.348 "rw_mbytes_per_sec": 0, 01:29:38.348 "r_mbytes_per_sec": 0, 01:29:38.348 "w_mbytes_per_sec": 0 01:29:38.348 }, 01:29:38.348 "claimed": true, 01:29:38.348 "claim_type": "exclusive_write", 01:29:38.348 "zoned": false, 01:29:38.348 "supported_io_types": { 01:29:38.348 "read": true, 01:29:38.348 "write": true, 01:29:38.348 "unmap": true, 01:29:38.348 "flush": true, 01:29:38.348 "reset": true, 01:29:38.348 "nvme_admin": false, 01:29:38.348 "nvme_io": false, 01:29:38.348 "nvme_io_md": false, 01:29:38.348 "write_zeroes": true, 01:29:38.348 "zcopy": true, 01:29:38.348 "get_zone_info": false, 01:29:38.348 "zone_management": false, 01:29:38.348 "zone_append": false, 01:29:38.348 
"compare": false, 01:29:38.348 "compare_and_write": false, 01:29:38.348 "abort": true, 01:29:38.348 "seek_hole": false, 01:29:38.348 "seek_data": false, 01:29:38.348 "copy": true, 01:29:38.348 "nvme_iov_md": false 01:29:38.348 }, 01:29:38.348 "memory_domains": [ 01:29:38.348 { 01:29:38.348 "dma_device_id": "system", 01:29:38.348 "dma_device_type": 1 01:29:38.348 }, 01:29:38.348 { 01:29:38.348 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:29:38.348 "dma_device_type": 2 01:29:38.348 } 01:29:38.348 ], 01:29:38.348 "driver_specific": {} 01:29:38.348 } 01:29:38.348 ] 01:29:38.348 05:24:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:38.348 05:24:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 01:29:38.348 05:24:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 01:29:38.348 05:24:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:29:38.348 05:24:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:29:38.348 05:24:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 01:29:38.348 05:24:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:29:38.348 05:24:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 01:29:38.348 05:24:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:29:38.348 05:24:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:29:38.348 05:24:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:29:38.348 05:24:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:29:38.348 05:24:29 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:29:38.348 05:24:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:29:38.348 05:24:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:38.348 05:24:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:29:38.348 05:24:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:38.348 05:24:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:29:38.348 "name": "Existed_Raid", 01:29:38.348 "uuid": "00000000-0000-0000-0000-000000000000", 01:29:38.348 "strip_size_kb": 64, 01:29:38.348 "state": "configuring", 01:29:38.348 "raid_level": "raid5f", 01:29:38.348 "superblock": false, 01:29:38.348 "num_base_bdevs": 3, 01:29:38.348 "num_base_bdevs_discovered": 2, 01:29:38.348 "num_base_bdevs_operational": 3, 01:29:38.348 "base_bdevs_list": [ 01:29:38.348 { 01:29:38.348 "name": "BaseBdev1", 01:29:38.348 "uuid": "a789c9fb-2efb-411f-9df5-32b7e09f7476", 01:29:38.348 "is_configured": true, 01:29:38.348 "data_offset": 0, 01:29:38.348 "data_size": 65536 01:29:38.348 }, 01:29:38.348 { 01:29:38.348 "name": null, 01:29:38.348 "uuid": "94f53639-bce4-4b41-8e1b-65e36e8f4e2f", 01:29:38.348 "is_configured": false, 01:29:38.348 "data_offset": 0, 01:29:38.348 "data_size": 65536 01:29:38.348 }, 01:29:38.348 { 01:29:38.348 "name": "BaseBdev3", 01:29:38.348 "uuid": "3690dc61-6a50-4795-9751-3cf04c766871", 01:29:38.348 "is_configured": true, 01:29:38.348 "data_offset": 0, 01:29:38.348 "data_size": 65536 01:29:38.348 } 01:29:38.348 ] 01:29:38.348 }' 01:29:38.348 05:24:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:29:38.348 05:24:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:29:38.916 05:24:30 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 01:29:38.916 05:24:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 01:29:38.916 05:24:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:38.916 05:24:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:29:38.916 05:24:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:38.916 05:24:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 01:29:38.916 05:24:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 01:29:38.916 05:24:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:38.916 05:24:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:29:38.916 [2024-12-09 05:24:30.388920] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 01:29:38.916 05:24:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:38.916 05:24:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 01:29:38.916 05:24:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:29:38.916 05:24:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:29:38.916 05:24:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 01:29:38.916 05:24:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:29:38.916 05:24:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 01:29:38.916 05:24:30 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:29:38.916 05:24:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:29:38.916 05:24:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:29:38.916 05:24:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:29:38.916 05:24:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:29:38.916 05:24:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:29:38.916 05:24:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:38.916 05:24:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:29:38.916 05:24:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:38.916 05:24:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:29:38.916 "name": "Existed_Raid", 01:29:38.916 "uuid": "00000000-0000-0000-0000-000000000000", 01:29:38.916 "strip_size_kb": 64, 01:29:38.916 "state": "configuring", 01:29:38.916 "raid_level": "raid5f", 01:29:38.916 "superblock": false, 01:29:38.916 "num_base_bdevs": 3, 01:29:38.916 "num_base_bdevs_discovered": 1, 01:29:38.916 "num_base_bdevs_operational": 3, 01:29:38.916 "base_bdevs_list": [ 01:29:38.916 { 01:29:38.916 "name": "BaseBdev1", 01:29:38.916 "uuid": "a789c9fb-2efb-411f-9df5-32b7e09f7476", 01:29:38.916 "is_configured": true, 01:29:38.916 "data_offset": 0, 01:29:38.916 "data_size": 65536 01:29:38.916 }, 01:29:38.916 { 01:29:38.916 "name": null, 01:29:38.916 "uuid": "94f53639-bce4-4b41-8e1b-65e36e8f4e2f", 01:29:38.916 "is_configured": false, 01:29:38.916 "data_offset": 0, 01:29:38.916 "data_size": 65536 01:29:38.916 }, 01:29:38.916 { 01:29:38.916 "name": null, 
01:29:38.916 "uuid": "3690dc61-6a50-4795-9751-3cf04c766871", 01:29:38.916 "is_configured": false, 01:29:38.916 "data_offset": 0, 01:29:38.916 "data_size": 65536 01:29:38.916 } 01:29:38.916 ] 01:29:38.916 }' 01:29:38.916 05:24:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:29:38.916 05:24:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:29:39.483 05:24:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 01:29:39.483 05:24:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 01:29:39.483 05:24:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:39.483 05:24:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:29:39.483 05:24:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:39.483 05:24:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 01:29:39.483 05:24:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 01:29:39.483 05:24:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:39.483 05:24:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:29:39.483 [2024-12-09 05:24:30.985220] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 01:29:39.483 05:24:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:39.483 05:24:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 01:29:39.483 05:24:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:29:39.483 05:24:30 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:29:39.483 05:24:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 01:29:39.483 05:24:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:29:39.483 05:24:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 01:29:39.483 05:24:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:29:39.483 05:24:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:29:39.483 05:24:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:29:39.483 05:24:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:29:39.483 05:24:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:29:39.483 05:24:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:29:39.483 05:24:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:39.483 05:24:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:29:39.483 05:24:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:39.483 05:24:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:29:39.483 "name": "Existed_Raid", 01:29:39.483 "uuid": "00000000-0000-0000-0000-000000000000", 01:29:39.483 "strip_size_kb": 64, 01:29:39.483 "state": "configuring", 01:29:39.483 "raid_level": "raid5f", 01:29:39.483 "superblock": false, 01:29:39.483 "num_base_bdevs": 3, 01:29:39.483 "num_base_bdevs_discovered": 2, 01:29:39.483 "num_base_bdevs_operational": 3, 01:29:39.483 "base_bdevs_list": [ 01:29:39.483 { 
01:29:39.483 "name": "BaseBdev1", 01:29:39.483 "uuid": "a789c9fb-2efb-411f-9df5-32b7e09f7476", 01:29:39.483 "is_configured": true, 01:29:39.483 "data_offset": 0, 01:29:39.483 "data_size": 65536 01:29:39.483 }, 01:29:39.483 { 01:29:39.483 "name": null, 01:29:39.483 "uuid": "94f53639-bce4-4b41-8e1b-65e36e8f4e2f", 01:29:39.483 "is_configured": false, 01:29:39.483 "data_offset": 0, 01:29:39.483 "data_size": 65536 01:29:39.483 }, 01:29:39.483 { 01:29:39.483 "name": "BaseBdev3", 01:29:39.483 "uuid": "3690dc61-6a50-4795-9751-3cf04c766871", 01:29:39.483 "is_configured": true, 01:29:39.483 "data_offset": 0, 01:29:39.483 "data_size": 65536 01:29:39.483 } 01:29:39.483 ] 01:29:39.483 }' 01:29:39.483 05:24:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:29:39.483 05:24:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:29:40.057 05:24:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 01:29:40.057 05:24:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:40.057 05:24:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:29:40.057 05:24:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 01:29:40.057 05:24:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:40.057 05:24:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 01:29:40.057 05:24:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 01:29:40.057 05:24:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:40.057 05:24:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:29:40.057 [2024-12-09 05:24:31.569450] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 01:29:40.057 05:24:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:40.057 05:24:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 01:29:40.057 05:24:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:29:40.057 05:24:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:29:40.057 05:24:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 01:29:40.057 05:24:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:29:40.057 05:24:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 01:29:40.057 05:24:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:29:40.057 05:24:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:29:40.057 05:24:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:29:40.057 05:24:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:29:40.329 05:24:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:29:40.329 05:24:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:29:40.329 05:24:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:40.329 05:24:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:29:40.329 05:24:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:40.329 05:24:31 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:29:40.329 "name": "Existed_Raid", 01:29:40.329 "uuid": "00000000-0000-0000-0000-000000000000", 01:29:40.329 "strip_size_kb": 64, 01:29:40.329 "state": "configuring", 01:29:40.329 "raid_level": "raid5f", 01:29:40.329 "superblock": false, 01:29:40.329 "num_base_bdevs": 3, 01:29:40.329 "num_base_bdevs_discovered": 1, 01:29:40.329 "num_base_bdevs_operational": 3, 01:29:40.329 "base_bdevs_list": [ 01:29:40.329 { 01:29:40.329 "name": null, 01:29:40.329 "uuid": "a789c9fb-2efb-411f-9df5-32b7e09f7476", 01:29:40.329 "is_configured": false, 01:29:40.329 "data_offset": 0, 01:29:40.329 "data_size": 65536 01:29:40.329 }, 01:29:40.329 { 01:29:40.329 "name": null, 01:29:40.329 "uuid": "94f53639-bce4-4b41-8e1b-65e36e8f4e2f", 01:29:40.329 "is_configured": false, 01:29:40.329 "data_offset": 0, 01:29:40.329 "data_size": 65536 01:29:40.329 }, 01:29:40.329 { 01:29:40.329 "name": "BaseBdev3", 01:29:40.329 "uuid": "3690dc61-6a50-4795-9751-3cf04c766871", 01:29:40.329 "is_configured": true, 01:29:40.329 "data_offset": 0, 01:29:40.329 "data_size": 65536 01:29:40.329 } 01:29:40.329 ] 01:29:40.329 }' 01:29:40.329 05:24:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:29:40.329 05:24:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:29:40.587 05:24:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 01:29:40.587 05:24:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 01:29:40.587 05:24:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:40.587 05:24:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:29:40.844 05:24:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:40.844 05:24:32 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 01:29:40.844 05:24:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 01:29:40.844 05:24:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:40.844 05:24:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:29:40.844 [2024-12-09 05:24:32.248215] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 01:29:40.844 05:24:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:40.844 05:24:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 01:29:40.844 05:24:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:29:40.844 05:24:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:29:40.844 05:24:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 01:29:40.844 05:24:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:29:40.844 05:24:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 01:29:40.844 05:24:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:29:40.844 05:24:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:29:40.844 05:24:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:29:40.844 05:24:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:29:40.844 05:24:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:29:40.844 05:24:32 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:29:40.844 05:24:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:40.844 05:24:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:29:40.844 05:24:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:40.844 05:24:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:29:40.844 "name": "Existed_Raid", 01:29:40.844 "uuid": "00000000-0000-0000-0000-000000000000", 01:29:40.844 "strip_size_kb": 64, 01:29:40.844 "state": "configuring", 01:29:40.844 "raid_level": "raid5f", 01:29:40.844 "superblock": false, 01:29:40.844 "num_base_bdevs": 3, 01:29:40.844 "num_base_bdevs_discovered": 2, 01:29:40.844 "num_base_bdevs_operational": 3, 01:29:40.844 "base_bdevs_list": [ 01:29:40.844 { 01:29:40.844 "name": null, 01:29:40.844 "uuid": "a789c9fb-2efb-411f-9df5-32b7e09f7476", 01:29:40.844 "is_configured": false, 01:29:40.844 "data_offset": 0, 01:29:40.844 "data_size": 65536 01:29:40.844 }, 01:29:40.844 { 01:29:40.844 "name": "BaseBdev2", 01:29:40.844 "uuid": "94f53639-bce4-4b41-8e1b-65e36e8f4e2f", 01:29:40.844 "is_configured": true, 01:29:40.844 "data_offset": 0, 01:29:40.844 "data_size": 65536 01:29:40.844 }, 01:29:40.844 { 01:29:40.844 "name": "BaseBdev3", 01:29:40.844 "uuid": "3690dc61-6a50-4795-9751-3cf04c766871", 01:29:40.844 "is_configured": true, 01:29:40.844 "data_offset": 0, 01:29:40.844 "data_size": 65536 01:29:40.844 } 01:29:40.844 ] 01:29:40.844 }' 01:29:40.845 05:24:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:29:40.845 05:24:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:29:41.409 05:24:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 01:29:41.409 05:24:32 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:41.409 05:24:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:29:41.409 05:24:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 01:29:41.409 05:24:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:41.409 05:24:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 01:29:41.409 05:24:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 01:29:41.409 05:24:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 01:29:41.409 05:24:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:41.409 05:24:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:29:41.409 05:24:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:41.409 05:24:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u a789c9fb-2efb-411f-9df5-32b7e09f7476 01:29:41.409 05:24:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:41.409 05:24:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:29:41.409 [2024-12-09 05:24:32.934631] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 01:29:41.409 [2024-12-09 05:24:32.934918] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 01:29:41.409 [2024-12-09 05:24:32.934949] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 01:29:41.409 [2024-12-09 05:24:32.935308] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000006220 01:29:41.409 NewBaseBdev 01:29:41.409 [2024-12-09 05:24:32.940261] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 01:29:41.409 [2024-12-09 05:24:32.940286] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 01:29:41.409 [2024-12-09 05:24:32.940608] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:29:41.409 05:24:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:41.409 05:24:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 01:29:41.409 05:24:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 01:29:41.409 05:24:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 01:29:41.409 05:24:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 01:29:41.409 05:24:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 01:29:41.409 05:24:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 01:29:41.409 05:24:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 01:29:41.409 05:24:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:41.409 05:24:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:29:41.409 05:24:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:41.409 05:24:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 01:29:41.409 05:24:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:41.409 05:24:32 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:29:41.409 [ 01:29:41.409 { 01:29:41.409 "name": "NewBaseBdev", 01:29:41.409 "aliases": [ 01:29:41.409 "a789c9fb-2efb-411f-9df5-32b7e09f7476" 01:29:41.409 ], 01:29:41.409 "product_name": "Malloc disk", 01:29:41.409 "block_size": 512, 01:29:41.409 "num_blocks": 65536, 01:29:41.409 "uuid": "a789c9fb-2efb-411f-9df5-32b7e09f7476", 01:29:41.409 "assigned_rate_limits": { 01:29:41.409 "rw_ios_per_sec": 0, 01:29:41.409 "rw_mbytes_per_sec": 0, 01:29:41.409 "r_mbytes_per_sec": 0, 01:29:41.409 "w_mbytes_per_sec": 0 01:29:41.409 }, 01:29:41.409 "claimed": true, 01:29:41.409 "claim_type": "exclusive_write", 01:29:41.409 "zoned": false, 01:29:41.409 "supported_io_types": { 01:29:41.409 "read": true, 01:29:41.409 "write": true, 01:29:41.409 "unmap": true, 01:29:41.409 "flush": true, 01:29:41.409 "reset": true, 01:29:41.409 "nvme_admin": false, 01:29:41.409 "nvme_io": false, 01:29:41.409 "nvme_io_md": false, 01:29:41.409 "write_zeroes": true, 01:29:41.409 "zcopy": true, 01:29:41.409 "get_zone_info": false, 01:29:41.409 "zone_management": false, 01:29:41.409 "zone_append": false, 01:29:41.409 "compare": false, 01:29:41.409 "compare_and_write": false, 01:29:41.409 "abort": true, 01:29:41.409 "seek_hole": false, 01:29:41.409 "seek_data": false, 01:29:41.409 "copy": true, 01:29:41.409 "nvme_iov_md": false 01:29:41.409 }, 01:29:41.409 "memory_domains": [ 01:29:41.409 { 01:29:41.409 "dma_device_id": "system", 01:29:41.409 "dma_device_type": 1 01:29:41.409 }, 01:29:41.409 { 01:29:41.409 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:29:41.409 "dma_device_type": 2 01:29:41.409 } 01:29:41.409 ], 01:29:41.409 "driver_specific": {} 01:29:41.409 } 01:29:41.409 ] 01:29:41.409 05:24:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:41.409 05:24:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 01:29:41.409 05:24:32 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 01:29:41.409 05:24:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:29:41.409 05:24:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:29:41.409 05:24:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 01:29:41.409 05:24:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:29:41.409 05:24:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 01:29:41.409 05:24:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:29:41.409 05:24:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:29:41.409 05:24:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:29:41.409 05:24:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:29:41.409 05:24:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:29:41.409 05:24:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:41.409 05:24:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:29:41.409 05:24:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:29:41.409 05:24:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:41.666 05:24:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:29:41.666 "name": "Existed_Raid", 01:29:41.666 "uuid": "f6bfd46f-3dec-4ba5-ac38-19eae9177035", 01:29:41.666 "strip_size_kb": 64, 01:29:41.666 "state": "online", 
01:29:41.666 "raid_level": "raid5f", 01:29:41.666 "superblock": false, 01:29:41.666 "num_base_bdevs": 3, 01:29:41.666 "num_base_bdevs_discovered": 3, 01:29:41.666 "num_base_bdevs_operational": 3, 01:29:41.666 "base_bdevs_list": [ 01:29:41.666 { 01:29:41.666 "name": "NewBaseBdev", 01:29:41.666 "uuid": "a789c9fb-2efb-411f-9df5-32b7e09f7476", 01:29:41.666 "is_configured": true, 01:29:41.666 "data_offset": 0, 01:29:41.667 "data_size": 65536 01:29:41.667 }, 01:29:41.667 { 01:29:41.667 "name": "BaseBdev2", 01:29:41.667 "uuid": "94f53639-bce4-4b41-8e1b-65e36e8f4e2f", 01:29:41.667 "is_configured": true, 01:29:41.667 "data_offset": 0, 01:29:41.667 "data_size": 65536 01:29:41.667 }, 01:29:41.667 { 01:29:41.667 "name": "BaseBdev3", 01:29:41.667 "uuid": "3690dc61-6a50-4795-9751-3cf04c766871", 01:29:41.667 "is_configured": true, 01:29:41.667 "data_offset": 0, 01:29:41.667 "data_size": 65536 01:29:41.667 } 01:29:41.667 ] 01:29:41.667 }' 01:29:41.667 05:24:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:29:41.667 05:24:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:29:41.924 05:24:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 01:29:41.924 05:24:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 01:29:41.924 05:24:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 01:29:41.924 05:24:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 01:29:41.924 05:24:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 01:29:41.924 05:24:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 01:29:41.924 05:24:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 01:29:41.924 05:24:33 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 01:29:41.924 05:24:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:41.924 05:24:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:29:41.924 [2024-12-09 05:24:33.515129] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 01:29:41.924 05:24:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:42.182 05:24:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 01:29:42.182 "name": "Existed_Raid", 01:29:42.182 "aliases": [ 01:29:42.182 "f6bfd46f-3dec-4ba5-ac38-19eae9177035" 01:29:42.182 ], 01:29:42.182 "product_name": "Raid Volume", 01:29:42.182 "block_size": 512, 01:29:42.182 "num_blocks": 131072, 01:29:42.182 "uuid": "f6bfd46f-3dec-4ba5-ac38-19eae9177035", 01:29:42.182 "assigned_rate_limits": { 01:29:42.182 "rw_ios_per_sec": 0, 01:29:42.182 "rw_mbytes_per_sec": 0, 01:29:42.182 "r_mbytes_per_sec": 0, 01:29:42.182 "w_mbytes_per_sec": 0 01:29:42.182 }, 01:29:42.182 "claimed": false, 01:29:42.182 "zoned": false, 01:29:42.182 "supported_io_types": { 01:29:42.182 "read": true, 01:29:42.182 "write": true, 01:29:42.182 "unmap": false, 01:29:42.182 "flush": false, 01:29:42.182 "reset": true, 01:29:42.182 "nvme_admin": false, 01:29:42.182 "nvme_io": false, 01:29:42.182 "nvme_io_md": false, 01:29:42.182 "write_zeroes": true, 01:29:42.182 "zcopy": false, 01:29:42.182 "get_zone_info": false, 01:29:42.182 "zone_management": false, 01:29:42.182 "zone_append": false, 01:29:42.182 "compare": false, 01:29:42.182 "compare_and_write": false, 01:29:42.182 "abort": false, 01:29:42.182 "seek_hole": false, 01:29:42.182 "seek_data": false, 01:29:42.182 "copy": false, 01:29:42.182 "nvme_iov_md": false 01:29:42.182 }, 01:29:42.182 "driver_specific": { 01:29:42.182 "raid": { 01:29:42.182 "uuid": "f6bfd46f-3dec-4ba5-ac38-19eae9177035", 
01:29:42.182 "strip_size_kb": 64, 01:29:42.182 "state": "online", 01:29:42.182 "raid_level": "raid5f", 01:29:42.182 "superblock": false, 01:29:42.182 "num_base_bdevs": 3, 01:29:42.182 "num_base_bdevs_discovered": 3, 01:29:42.182 "num_base_bdevs_operational": 3, 01:29:42.182 "base_bdevs_list": [ 01:29:42.182 { 01:29:42.182 "name": "NewBaseBdev", 01:29:42.182 "uuid": "a789c9fb-2efb-411f-9df5-32b7e09f7476", 01:29:42.182 "is_configured": true, 01:29:42.182 "data_offset": 0, 01:29:42.182 "data_size": 65536 01:29:42.182 }, 01:29:42.182 { 01:29:42.182 "name": "BaseBdev2", 01:29:42.182 "uuid": "94f53639-bce4-4b41-8e1b-65e36e8f4e2f", 01:29:42.182 "is_configured": true, 01:29:42.182 "data_offset": 0, 01:29:42.182 "data_size": 65536 01:29:42.182 }, 01:29:42.182 { 01:29:42.182 "name": "BaseBdev3", 01:29:42.182 "uuid": "3690dc61-6a50-4795-9751-3cf04c766871", 01:29:42.182 "is_configured": true, 01:29:42.182 "data_offset": 0, 01:29:42.182 "data_size": 65536 01:29:42.182 } 01:29:42.182 ] 01:29:42.182 } 01:29:42.182 } 01:29:42.182 }' 01:29:42.182 05:24:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 01:29:42.182 05:24:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 01:29:42.182 BaseBdev2 01:29:42.183 BaseBdev3' 01:29:42.183 05:24:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:29:42.183 05:24:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 01:29:42.183 05:24:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:29:42.183 05:24:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 01:29:42.183 05:24:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
01:29:42.183 05:24:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:29:42.183 05:24:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:29:42.183 05:24:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:42.183 05:24:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:29:42.183 05:24:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:29:42.183 05:24:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:29:42.183 05:24:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:29:42.183 05:24:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 01:29:42.183 05:24:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:42.183 05:24:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:29:42.183 05:24:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:42.183 05:24:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:29:42.183 05:24:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:29:42.183 05:24:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:29:42.183 05:24:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 01:29:42.183 05:24:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:42.183 05:24:33 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 01:29:42.183 05:24:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:29:42.183 05:24:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:42.441 05:24:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:29:42.441 05:24:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:29:42.441 05:24:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 01:29:42.441 05:24:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:42.441 05:24:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:29:42.441 [2024-12-09 05:24:33.830932] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 01:29:42.441 [2024-12-09 05:24:33.831088] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 01:29:42.441 [2024-12-09 05:24:33.831408] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 01:29:42.441 [2024-12-09 05:24:33.831818] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 01:29:42.441 [2024-12-09 05:24:33.831854] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 01:29:42.441 05:24:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:42.441 05:24:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 80160 01:29:42.441 05:24:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 80160 ']' 01:29:42.441 05:24:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # kill -0 
80160 01:29:42.441 05:24:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 01:29:42.441 05:24:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:29:42.441 05:24:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80160 01:29:42.441 killing process with pid 80160 01:29:42.441 05:24:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:29:42.441 05:24:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:29:42.441 05:24:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80160' 01:29:42.441 05:24:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 80160 01:29:42.441 [2024-12-09 05:24:33.871004] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 01:29:42.441 05:24:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 80160 01:29:42.699 [2024-12-09 05:24:34.125534] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 01:29:43.633 05:24:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 01:29:43.633 01:29:43.633 real 0m12.283s 01:29:43.633 user 0m20.405s 01:29:43.633 sys 0m1.699s 01:29:43.633 05:24:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 01:29:43.633 ************************************ 01:29:43.633 END TEST raid5f_state_function_test 01:29:43.633 ************************************ 01:29:43.633 05:24:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:29:43.891 05:24:35 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 01:29:43.891 05:24:35 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 
01:29:43.891 05:24:35 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 01:29:43.891 05:24:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 01:29:43.891 ************************************ 01:29:43.891 START TEST raid5f_state_function_test_sb 01:29:43.891 ************************************ 01:29:43.891 05:24:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 true 01:29:43.891 05:24:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 01:29:43.891 05:24:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 01:29:43.891 05:24:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 01:29:43.891 05:24:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 01:29:43.891 05:24:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 01:29:43.891 05:24:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 01:29:43.891 05:24:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 01:29:43.891 05:24:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 01:29:43.891 05:24:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 01:29:43.891 05:24:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 01:29:43.891 05:24:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 01:29:43.891 05:24:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 01:29:43.891 05:24:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 01:29:43.891 05:24:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
01:29:43.891 05:24:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 01:29:43.891 05:24:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 01:29:43.891 05:24:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 01:29:43.891 05:24:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 01:29:43.891 05:24:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 01:29:43.891 05:24:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 01:29:43.891 05:24:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 01:29:43.891 05:24:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 01:29:43.891 05:24:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 01:29:43.891 05:24:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 01:29:43.891 05:24:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 01:29:43.891 05:24:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 01:29:43.891 05:24:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 01:29:43.891 05:24:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=80796 01:29:43.891 05:24:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80796' 01:29:43.891 Process raid pid: 80796 01:29:43.891 05:24:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 80796 01:29:43.891 05:24:35 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 80796 ']' 01:29:43.891 05:24:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:29:43.891 05:24:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 01:29:43.891 05:24:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:29:43.891 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:29:43.891 05:24:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 01:29:43.891 05:24:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:29:43.891 [2024-12-09 05:24:35.375703] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:29:43.891 [2024-12-09 05:24:35.376149] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:29:44.149 [2024-12-09 05:24:35.547008] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:29:44.149 [2024-12-09 05:24:35.678577] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:29:44.408 [2024-12-09 05:24:35.901646] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 01:29:44.408 [2024-12-09 05:24:35.901691] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 01:29:44.974 05:24:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:29:44.974 05:24:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 01:29:44.974 05:24:36 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 01:29:44.974 05:24:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:44.974 05:24:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:29:44.974 [2024-12-09 05:24:36.379816] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 01:29:44.975 [2024-12-09 05:24:36.380067] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 01:29:44.975 [2024-12-09 05:24:36.380096] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 01:29:44.975 [2024-12-09 05:24:36.380115] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 01:29:44.975 [2024-12-09 05:24:36.380127] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 01:29:44.975 [2024-12-09 05:24:36.380142] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 01:29:44.975 05:24:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:44.975 05:24:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 01:29:44.975 05:24:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:29:44.975 05:24:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:29:44.975 05:24:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 01:29:44.975 05:24:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:29:44.975 05:24:36 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 01:29:44.975 05:24:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:29:44.975 05:24:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:29:44.975 05:24:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:29:44.975 05:24:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 01:29:44.975 05:24:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:29:44.975 05:24:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:29:44.975 05:24:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:44.975 05:24:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:29:44.975 05:24:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:44.975 05:24:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:29:44.975 "name": "Existed_Raid", 01:29:44.975 "uuid": "9df02d8e-a04e-49d2-8676-e8c0e7d8424c", 01:29:44.975 "strip_size_kb": 64, 01:29:44.975 "state": "configuring", 01:29:44.975 "raid_level": "raid5f", 01:29:44.975 "superblock": true, 01:29:44.975 "num_base_bdevs": 3, 01:29:44.975 "num_base_bdevs_discovered": 0, 01:29:44.975 "num_base_bdevs_operational": 3, 01:29:44.975 "base_bdevs_list": [ 01:29:44.975 { 01:29:44.975 "name": "BaseBdev1", 01:29:44.975 "uuid": "00000000-0000-0000-0000-000000000000", 01:29:44.975 "is_configured": false, 01:29:44.975 "data_offset": 0, 01:29:44.975 "data_size": 0 01:29:44.975 }, 01:29:44.975 { 01:29:44.975 "name": "BaseBdev2", 01:29:44.975 "uuid": "00000000-0000-0000-0000-000000000000", 01:29:44.975 "is_configured": false, 01:29:44.975 
"data_offset": 0, 01:29:44.975 "data_size": 0 01:29:44.975 }, 01:29:44.975 { 01:29:44.975 "name": "BaseBdev3", 01:29:44.975 "uuid": "00000000-0000-0000-0000-000000000000", 01:29:44.975 "is_configured": false, 01:29:44.975 "data_offset": 0, 01:29:44.975 "data_size": 0 01:29:44.975 } 01:29:44.975 ] 01:29:44.975 }' 01:29:44.975 05:24:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:29:44.975 05:24:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:29:45.541 05:24:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 01:29:45.541 05:24:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:45.541 05:24:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:29:45.541 [2024-12-09 05:24:36.904219] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 01:29:45.541 [2024-12-09 05:24:36.904270] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 01:29:45.541 05:24:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:45.541 05:24:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 01:29:45.541 05:24:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:45.541 05:24:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:29:45.541 [2024-12-09 05:24:36.912202] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 01:29:45.541 [2024-12-09 05:24:36.912279] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 01:29:45.542 [2024-12-09 05:24:36.912294] 
bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 01:29:45.542 [2024-12-09 05:24:36.912310] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 01:29:45.542 [2024-12-09 05:24:36.912319] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 01:29:45.542 [2024-12-09 05:24:36.912333] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 01:29:45.542 05:24:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:45.542 05:24:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 01:29:45.542 05:24:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:45.542 05:24:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:29:45.542 [2024-12-09 05:24:36.956867] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 01:29:45.542 BaseBdev1 01:29:45.542 05:24:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:45.542 05:24:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 01:29:45.542 05:24:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 01:29:45.542 05:24:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 01:29:45.542 05:24:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 01:29:45.542 05:24:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 01:29:45.542 05:24:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 01:29:45.542 05:24:36 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 01:29:45.542 05:24:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:45.542 05:24:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:29:45.542 05:24:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:45.542 05:24:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 01:29:45.542 05:24:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:45.542 05:24:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:29:45.542 [ 01:29:45.542 { 01:29:45.542 "name": "BaseBdev1", 01:29:45.542 "aliases": [ 01:29:45.542 "d106cb9d-6dee-4306-8edf-e955d6573442" 01:29:45.542 ], 01:29:45.542 "product_name": "Malloc disk", 01:29:45.542 "block_size": 512, 01:29:45.542 "num_blocks": 65536, 01:29:45.542 "uuid": "d106cb9d-6dee-4306-8edf-e955d6573442", 01:29:45.542 "assigned_rate_limits": { 01:29:45.542 "rw_ios_per_sec": 0, 01:29:45.542 "rw_mbytes_per_sec": 0, 01:29:45.542 "r_mbytes_per_sec": 0, 01:29:45.542 "w_mbytes_per_sec": 0 01:29:45.542 }, 01:29:45.542 "claimed": true, 01:29:45.542 "claim_type": "exclusive_write", 01:29:45.542 "zoned": false, 01:29:45.542 "supported_io_types": { 01:29:45.542 "read": true, 01:29:45.542 "write": true, 01:29:45.542 "unmap": true, 01:29:45.542 "flush": true, 01:29:45.542 "reset": true, 01:29:45.542 "nvme_admin": false, 01:29:45.542 "nvme_io": false, 01:29:45.542 "nvme_io_md": false, 01:29:45.542 "write_zeroes": true, 01:29:45.542 "zcopy": true, 01:29:45.542 "get_zone_info": false, 01:29:45.542 "zone_management": false, 01:29:45.542 "zone_append": false, 01:29:45.542 "compare": false, 01:29:45.542 "compare_and_write": false, 01:29:45.542 "abort": true, 01:29:45.542 "seek_hole": false, 01:29:45.542 
"seek_data": false, 01:29:45.542 "copy": true, 01:29:45.542 "nvme_iov_md": false 01:29:45.542 }, 01:29:45.542 "memory_domains": [ 01:29:45.542 { 01:29:45.542 "dma_device_id": "system", 01:29:45.542 "dma_device_type": 1 01:29:45.542 }, 01:29:45.542 { 01:29:45.542 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:29:45.542 "dma_device_type": 2 01:29:45.542 } 01:29:45.542 ], 01:29:45.542 "driver_specific": {} 01:29:45.542 } 01:29:45.542 ] 01:29:45.542 05:24:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:45.542 05:24:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 01:29:45.542 05:24:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 01:29:45.542 05:24:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:29:45.542 05:24:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:29:45.542 05:24:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 01:29:45.542 05:24:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:29:45.542 05:24:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 01:29:45.542 05:24:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:29:45.542 05:24:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:29:45.542 05:24:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:29:45.542 05:24:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 01:29:45.542 05:24:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
01:29:45.542 05:24:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:45.542 05:24:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:29:45.542 05:24:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:29:45.542 05:24:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:45.542 05:24:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:29:45.542 "name": "Existed_Raid", 01:29:45.542 "uuid": "84cb68a5-4479-4da2-bce8-9af7c2fb9b56", 01:29:45.542 "strip_size_kb": 64, 01:29:45.542 "state": "configuring", 01:29:45.542 "raid_level": "raid5f", 01:29:45.542 "superblock": true, 01:29:45.542 "num_base_bdevs": 3, 01:29:45.542 "num_base_bdevs_discovered": 1, 01:29:45.542 "num_base_bdevs_operational": 3, 01:29:45.542 "base_bdevs_list": [ 01:29:45.542 { 01:29:45.542 "name": "BaseBdev1", 01:29:45.542 "uuid": "d106cb9d-6dee-4306-8edf-e955d6573442", 01:29:45.542 "is_configured": true, 01:29:45.542 "data_offset": 2048, 01:29:45.542 "data_size": 63488 01:29:45.542 }, 01:29:45.542 { 01:29:45.542 "name": "BaseBdev2", 01:29:45.542 "uuid": "00000000-0000-0000-0000-000000000000", 01:29:45.542 "is_configured": false, 01:29:45.542 "data_offset": 0, 01:29:45.542 "data_size": 0 01:29:45.542 }, 01:29:45.542 { 01:29:45.542 "name": "BaseBdev3", 01:29:45.542 "uuid": "00000000-0000-0000-0000-000000000000", 01:29:45.542 "is_configured": false, 01:29:45.542 "data_offset": 0, 01:29:45.542 "data_size": 0 01:29:45.542 } 01:29:45.542 ] 01:29:45.542 }' 01:29:45.542 05:24:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:29:45.542 05:24:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:29:46.109 05:24:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd 
bdev_raid_delete Existed_Raid 01:29:46.109 05:24:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:46.109 05:24:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:29:46.109 [2024-12-09 05:24:37.497105] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 01:29:46.109 [2024-12-09 05:24:37.497168] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 01:29:46.109 05:24:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:46.109 05:24:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 01:29:46.109 05:24:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:46.109 05:24:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:29:46.109 [2024-12-09 05:24:37.505172] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 01:29:46.109 [2024-12-09 05:24:37.507889] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 01:29:46.109 [2024-12-09 05:24:37.508094] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 01:29:46.109 [2024-12-09 05:24:37.508123] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 01:29:46.109 [2024-12-09 05:24:37.508142] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 01:29:46.109 05:24:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:46.109 05:24:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 01:29:46.109 05:24:37 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 01:29:46.109 05:24:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 01:29:46.109 05:24:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:29:46.109 05:24:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:29:46.109 05:24:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 01:29:46.109 05:24:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:29:46.109 05:24:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 01:29:46.109 05:24:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:29:46.109 05:24:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:29:46.109 05:24:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:29:46.109 05:24:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 01:29:46.109 05:24:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:29:46.109 05:24:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:29:46.109 05:24:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:46.109 05:24:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:29:46.109 05:24:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:46.109 05:24:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:29:46.109 "name": 
"Existed_Raid", 01:29:46.109 "uuid": "53975deb-c84e-4f6a-8041-ef78747ec880", 01:29:46.109 "strip_size_kb": 64, 01:29:46.109 "state": "configuring", 01:29:46.109 "raid_level": "raid5f", 01:29:46.109 "superblock": true, 01:29:46.109 "num_base_bdevs": 3, 01:29:46.109 "num_base_bdevs_discovered": 1, 01:29:46.109 "num_base_bdevs_operational": 3, 01:29:46.109 "base_bdevs_list": [ 01:29:46.109 { 01:29:46.109 "name": "BaseBdev1", 01:29:46.109 "uuid": "d106cb9d-6dee-4306-8edf-e955d6573442", 01:29:46.109 "is_configured": true, 01:29:46.109 "data_offset": 2048, 01:29:46.109 "data_size": 63488 01:29:46.109 }, 01:29:46.109 { 01:29:46.109 "name": "BaseBdev2", 01:29:46.109 "uuid": "00000000-0000-0000-0000-000000000000", 01:29:46.109 "is_configured": false, 01:29:46.109 "data_offset": 0, 01:29:46.109 "data_size": 0 01:29:46.109 }, 01:29:46.109 { 01:29:46.109 "name": "BaseBdev3", 01:29:46.109 "uuid": "00000000-0000-0000-0000-000000000000", 01:29:46.109 "is_configured": false, 01:29:46.109 "data_offset": 0, 01:29:46.109 "data_size": 0 01:29:46.109 } 01:29:46.109 ] 01:29:46.109 }' 01:29:46.109 05:24:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:29:46.109 05:24:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:29:46.676 05:24:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 01:29:46.676 05:24:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:46.676 05:24:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:29:46.676 [2024-12-09 05:24:38.066072] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 01:29:46.676 BaseBdev2 01:29:46.676 05:24:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:46.676 05:24:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 
-- # waitforbdev BaseBdev2 01:29:46.676 05:24:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 01:29:46.676 05:24:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 01:29:46.676 05:24:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 01:29:46.676 05:24:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 01:29:46.676 05:24:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 01:29:46.676 05:24:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 01:29:46.676 05:24:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:46.676 05:24:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:29:46.676 05:24:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:46.676 05:24:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 01:29:46.676 05:24:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:46.676 05:24:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:29:46.676 [ 01:29:46.676 { 01:29:46.676 "name": "BaseBdev2", 01:29:46.676 "aliases": [ 01:29:46.676 "db7d0980-4845-4098-91cc-6ccffa5a63a5" 01:29:46.676 ], 01:29:46.676 "product_name": "Malloc disk", 01:29:46.676 "block_size": 512, 01:29:46.676 "num_blocks": 65536, 01:29:46.676 "uuid": "db7d0980-4845-4098-91cc-6ccffa5a63a5", 01:29:46.676 "assigned_rate_limits": { 01:29:46.676 "rw_ios_per_sec": 0, 01:29:46.676 "rw_mbytes_per_sec": 0, 01:29:46.676 "r_mbytes_per_sec": 0, 01:29:46.676 "w_mbytes_per_sec": 0 01:29:46.676 }, 01:29:46.676 "claimed": true, 
01:29:46.676 "claim_type": "exclusive_write", 01:29:46.676 "zoned": false, 01:29:46.676 "supported_io_types": { 01:29:46.676 "read": true, 01:29:46.676 "write": true, 01:29:46.676 "unmap": true, 01:29:46.676 "flush": true, 01:29:46.676 "reset": true, 01:29:46.676 "nvme_admin": false, 01:29:46.676 "nvme_io": false, 01:29:46.676 "nvme_io_md": false, 01:29:46.676 "write_zeroes": true, 01:29:46.676 "zcopy": true, 01:29:46.676 "get_zone_info": false, 01:29:46.676 "zone_management": false, 01:29:46.676 "zone_append": false, 01:29:46.676 "compare": false, 01:29:46.676 "compare_and_write": false, 01:29:46.676 "abort": true, 01:29:46.677 "seek_hole": false, 01:29:46.677 "seek_data": false, 01:29:46.677 "copy": true, 01:29:46.677 "nvme_iov_md": false 01:29:46.677 }, 01:29:46.677 "memory_domains": [ 01:29:46.677 { 01:29:46.677 "dma_device_id": "system", 01:29:46.677 "dma_device_type": 1 01:29:46.677 }, 01:29:46.677 { 01:29:46.677 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:29:46.677 "dma_device_type": 2 01:29:46.677 } 01:29:46.677 ], 01:29:46.677 "driver_specific": {} 01:29:46.677 } 01:29:46.677 ] 01:29:46.677 05:24:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:46.677 05:24:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 01:29:46.677 05:24:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 01:29:46.677 05:24:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 01:29:46.677 05:24:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 01:29:46.677 05:24:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:29:46.677 05:24:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:29:46.677 05:24:38 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 01:29:46.677 05:24:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:29:46.677 05:24:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 01:29:46.677 05:24:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:29:46.677 05:24:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:29:46.677 05:24:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:29:46.677 05:24:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 01:29:46.677 05:24:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:29:46.677 05:24:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:46.677 05:24:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:29:46.677 05:24:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:29:46.677 05:24:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:46.677 05:24:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:29:46.677 "name": "Existed_Raid", 01:29:46.677 "uuid": "53975deb-c84e-4f6a-8041-ef78747ec880", 01:29:46.677 "strip_size_kb": 64, 01:29:46.677 "state": "configuring", 01:29:46.677 "raid_level": "raid5f", 01:29:46.677 "superblock": true, 01:29:46.677 "num_base_bdevs": 3, 01:29:46.677 "num_base_bdevs_discovered": 2, 01:29:46.677 "num_base_bdevs_operational": 3, 01:29:46.677 "base_bdevs_list": [ 01:29:46.677 { 01:29:46.677 "name": "BaseBdev1", 01:29:46.677 "uuid": "d106cb9d-6dee-4306-8edf-e955d6573442", 
01:29:46.677 "is_configured": true, 01:29:46.677 "data_offset": 2048, 01:29:46.677 "data_size": 63488 01:29:46.677 }, 01:29:46.677 { 01:29:46.677 "name": "BaseBdev2", 01:29:46.677 "uuid": "db7d0980-4845-4098-91cc-6ccffa5a63a5", 01:29:46.677 "is_configured": true, 01:29:46.677 "data_offset": 2048, 01:29:46.677 "data_size": 63488 01:29:46.677 }, 01:29:46.677 { 01:29:46.677 "name": "BaseBdev3", 01:29:46.677 "uuid": "00000000-0000-0000-0000-000000000000", 01:29:46.677 "is_configured": false, 01:29:46.677 "data_offset": 0, 01:29:46.677 "data_size": 0 01:29:46.677 } 01:29:46.677 ] 01:29:46.677 }' 01:29:46.677 05:24:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:29:46.677 05:24:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:29:47.243 05:24:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 01:29:47.244 05:24:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:47.244 05:24:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:29:47.244 [2024-12-09 05:24:38.659859] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 01:29:47.244 [2024-12-09 05:24:38.660524] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 01:29:47.244 [2024-12-09 05:24:38.660560] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 01:29:47.244 BaseBdev3 01:29:47.244 [2024-12-09 05:24:38.660919] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 01:29:47.244 05:24:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:47.244 05:24:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 01:29:47.244 05:24:38 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 01:29:47.244 05:24:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 01:29:47.244 05:24:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 01:29:47.244 05:24:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 01:29:47.244 05:24:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 01:29:47.244 05:24:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 01:29:47.244 05:24:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:47.244 05:24:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:29:47.244 [2024-12-09 05:24:38.666155] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 01:29:47.244 [2024-12-09 05:24:38.666180] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 01:29:47.244 [2024-12-09 05:24:38.666712] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:29:47.244 05:24:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:47.244 05:24:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 01:29:47.244 05:24:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:47.244 05:24:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:29:47.244 [ 01:29:47.244 { 01:29:47.244 "name": "BaseBdev3", 01:29:47.244 "aliases": [ 01:29:47.244 "e434e316-1039-462b-bb0a-906632494500" 01:29:47.244 ], 01:29:47.244 "product_name": "Malloc disk", 01:29:47.244 "block_size": 512, 01:29:47.244 
"num_blocks": 65536, 01:29:47.244 "uuid": "e434e316-1039-462b-bb0a-906632494500", 01:29:47.244 "assigned_rate_limits": { 01:29:47.244 "rw_ios_per_sec": 0, 01:29:47.244 "rw_mbytes_per_sec": 0, 01:29:47.244 "r_mbytes_per_sec": 0, 01:29:47.244 "w_mbytes_per_sec": 0 01:29:47.244 }, 01:29:47.244 "claimed": true, 01:29:47.244 "claim_type": "exclusive_write", 01:29:47.244 "zoned": false, 01:29:47.244 "supported_io_types": { 01:29:47.244 "read": true, 01:29:47.244 "write": true, 01:29:47.244 "unmap": true, 01:29:47.244 "flush": true, 01:29:47.244 "reset": true, 01:29:47.244 "nvme_admin": false, 01:29:47.244 "nvme_io": false, 01:29:47.244 "nvme_io_md": false, 01:29:47.244 "write_zeroes": true, 01:29:47.244 "zcopy": true, 01:29:47.244 "get_zone_info": false, 01:29:47.244 "zone_management": false, 01:29:47.244 "zone_append": false, 01:29:47.244 "compare": false, 01:29:47.244 "compare_and_write": false, 01:29:47.244 "abort": true, 01:29:47.244 "seek_hole": false, 01:29:47.244 "seek_data": false, 01:29:47.244 "copy": true, 01:29:47.244 "nvme_iov_md": false 01:29:47.244 }, 01:29:47.244 "memory_domains": [ 01:29:47.244 { 01:29:47.244 "dma_device_id": "system", 01:29:47.244 "dma_device_type": 1 01:29:47.244 }, 01:29:47.244 { 01:29:47.244 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:29:47.244 "dma_device_type": 2 01:29:47.244 } 01:29:47.244 ], 01:29:47.244 "driver_specific": {} 01:29:47.244 } 01:29:47.244 ] 01:29:47.244 05:24:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:47.244 05:24:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 01:29:47.244 05:24:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 01:29:47.244 05:24:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 01:29:47.244 05:24:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 
3 01:29:47.244 05:24:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:29:47.244 05:24:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:29:47.244 05:24:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 01:29:47.244 05:24:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:29:47.244 05:24:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 01:29:47.244 05:24:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:29:47.244 05:24:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:29:47.244 05:24:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:29:47.244 05:24:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 01:29:47.244 05:24:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:29:47.244 05:24:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:47.244 05:24:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:29:47.244 05:24:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:29:47.244 05:24:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:47.244 05:24:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:29:47.244 "name": "Existed_Raid", 01:29:47.244 "uuid": "53975deb-c84e-4f6a-8041-ef78747ec880", 01:29:47.244 "strip_size_kb": 64, 01:29:47.244 "state": "online", 01:29:47.244 "raid_level": "raid5f", 01:29:47.244 "superblock": true, 
01:29:47.244 "num_base_bdevs": 3, 01:29:47.244 "num_base_bdevs_discovered": 3, 01:29:47.244 "num_base_bdevs_operational": 3, 01:29:47.244 "base_bdevs_list": [ 01:29:47.244 { 01:29:47.244 "name": "BaseBdev1", 01:29:47.244 "uuid": "d106cb9d-6dee-4306-8edf-e955d6573442", 01:29:47.244 "is_configured": true, 01:29:47.244 "data_offset": 2048, 01:29:47.244 "data_size": 63488 01:29:47.244 }, 01:29:47.244 { 01:29:47.244 "name": "BaseBdev2", 01:29:47.244 "uuid": "db7d0980-4845-4098-91cc-6ccffa5a63a5", 01:29:47.244 "is_configured": true, 01:29:47.244 "data_offset": 2048, 01:29:47.244 "data_size": 63488 01:29:47.244 }, 01:29:47.244 { 01:29:47.244 "name": "BaseBdev3", 01:29:47.244 "uuid": "e434e316-1039-462b-bb0a-906632494500", 01:29:47.244 "is_configured": true, 01:29:47.244 "data_offset": 2048, 01:29:47.244 "data_size": 63488 01:29:47.244 } 01:29:47.244 ] 01:29:47.244 }' 01:29:47.244 05:24:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:29:47.244 05:24:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:29:47.811 05:24:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 01:29:47.811 05:24:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 01:29:47.811 05:24:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 01:29:47.811 05:24:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 01:29:47.811 05:24:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 01:29:47.811 05:24:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 01:29:47.811 05:24:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 01:29:47.811 05:24:39 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@187 -- # jq '.[]' 01:29:47.811 05:24:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:47.811 05:24:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:29:47.811 [2024-12-09 05:24:39.228783] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 01:29:47.811 05:24:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:47.811 05:24:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 01:29:47.811 "name": "Existed_Raid", 01:29:47.811 "aliases": [ 01:29:47.811 "53975deb-c84e-4f6a-8041-ef78747ec880" 01:29:47.811 ], 01:29:47.811 "product_name": "Raid Volume", 01:29:47.811 "block_size": 512, 01:29:47.811 "num_blocks": 126976, 01:29:47.811 "uuid": "53975deb-c84e-4f6a-8041-ef78747ec880", 01:29:47.811 "assigned_rate_limits": { 01:29:47.811 "rw_ios_per_sec": 0, 01:29:47.811 "rw_mbytes_per_sec": 0, 01:29:47.811 "r_mbytes_per_sec": 0, 01:29:47.811 "w_mbytes_per_sec": 0 01:29:47.811 }, 01:29:47.811 "claimed": false, 01:29:47.811 "zoned": false, 01:29:47.811 "supported_io_types": { 01:29:47.811 "read": true, 01:29:47.811 "write": true, 01:29:47.811 "unmap": false, 01:29:47.811 "flush": false, 01:29:47.811 "reset": true, 01:29:47.811 "nvme_admin": false, 01:29:47.811 "nvme_io": false, 01:29:47.811 "nvme_io_md": false, 01:29:47.811 "write_zeroes": true, 01:29:47.811 "zcopy": false, 01:29:47.811 "get_zone_info": false, 01:29:47.811 "zone_management": false, 01:29:47.811 "zone_append": false, 01:29:47.811 "compare": false, 01:29:47.811 "compare_and_write": false, 01:29:47.811 "abort": false, 01:29:47.811 "seek_hole": false, 01:29:47.811 "seek_data": false, 01:29:47.811 "copy": false, 01:29:47.811 "nvme_iov_md": false 01:29:47.811 }, 01:29:47.811 "driver_specific": { 01:29:47.811 "raid": { 01:29:47.811 "uuid": "53975deb-c84e-4f6a-8041-ef78747ec880", 01:29:47.811 
"strip_size_kb": 64, 01:29:47.811 "state": "online", 01:29:47.811 "raid_level": "raid5f", 01:29:47.811 "superblock": true, 01:29:47.811 "num_base_bdevs": 3, 01:29:47.811 "num_base_bdevs_discovered": 3, 01:29:47.811 "num_base_bdevs_operational": 3, 01:29:47.811 "base_bdevs_list": [ 01:29:47.811 { 01:29:47.811 "name": "BaseBdev1", 01:29:47.811 "uuid": "d106cb9d-6dee-4306-8edf-e955d6573442", 01:29:47.811 "is_configured": true, 01:29:47.811 "data_offset": 2048, 01:29:47.811 "data_size": 63488 01:29:47.811 }, 01:29:47.811 { 01:29:47.811 "name": "BaseBdev2", 01:29:47.811 "uuid": "db7d0980-4845-4098-91cc-6ccffa5a63a5", 01:29:47.811 "is_configured": true, 01:29:47.811 "data_offset": 2048, 01:29:47.811 "data_size": 63488 01:29:47.811 }, 01:29:47.811 { 01:29:47.811 "name": "BaseBdev3", 01:29:47.811 "uuid": "e434e316-1039-462b-bb0a-906632494500", 01:29:47.811 "is_configured": true, 01:29:47.811 "data_offset": 2048, 01:29:47.811 "data_size": 63488 01:29:47.811 } 01:29:47.811 ] 01:29:47.811 } 01:29:47.811 } 01:29:47.811 }' 01:29:47.811 05:24:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 01:29:47.811 05:24:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 01:29:47.811 BaseBdev2 01:29:47.811 BaseBdev3' 01:29:47.811 05:24:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:29:47.811 05:24:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 01:29:47.811 05:24:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:29:47.811 05:24:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 01:29:47.811 05:24:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 01:29:47.811 05:24:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:29:47.811 05:24:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:29:47.811 05:24:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:48.069 05:24:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:29:48.069 05:24:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:29:48.069 05:24:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:29:48.069 05:24:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 01:29:48.069 05:24:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:48.069 05:24:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:29:48.069 05:24:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:29:48.069 05:24:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:48.069 05:24:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:29:48.069 05:24:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:29:48.069 05:24:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:29:48.069 05:24:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 01:29:48.069 05:24:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:48.069 
05:24:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:29:48.069 05:24:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:29:48.069 05:24:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:48.069 05:24:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:29:48.069 05:24:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:29:48.069 05:24:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 01:29:48.069 05:24:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:48.069 05:24:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:29:48.069 [2024-12-09 05:24:39.548586] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 01:29:48.069 05:24:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:48.069 05:24:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 01:29:48.069 05:24:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 01:29:48.069 05:24:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 01:29:48.069 05:24:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 01:29:48.069 05:24:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 01:29:48.069 05:24:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 01:29:48.069 05:24:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 
01:29:48.069 05:24:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:29:48.069 05:24:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 01:29:48.069 05:24:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:29:48.069 05:24:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 01:29:48.069 05:24:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:29:48.069 05:24:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:29:48.069 05:24:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:29:48.069 05:24:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 01:29:48.069 05:24:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:29:48.069 05:24:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:29:48.069 05:24:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:48.069 05:24:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:29:48.069 05:24:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:48.327 05:24:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:29:48.327 "name": "Existed_Raid", 01:29:48.327 "uuid": "53975deb-c84e-4f6a-8041-ef78747ec880", 01:29:48.327 "strip_size_kb": 64, 01:29:48.327 "state": "online", 01:29:48.327 "raid_level": "raid5f", 01:29:48.327 "superblock": true, 01:29:48.327 "num_base_bdevs": 3, 01:29:48.327 "num_base_bdevs_discovered": 2, 01:29:48.327 "num_base_bdevs_operational": 2, 
01:29:48.327 "base_bdevs_list": [ 01:29:48.327 { 01:29:48.327 "name": null, 01:29:48.327 "uuid": "00000000-0000-0000-0000-000000000000", 01:29:48.328 "is_configured": false, 01:29:48.328 "data_offset": 0, 01:29:48.328 "data_size": 63488 01:29:48.328 }, 01:29:48.328 { 01:29:48.328 "name": "BaseBdev2", 01:29:48.328 "uuid": "db7d0980-4845-4098-91cc-6ccffa5a63a5", 01:29:48.328 "is_configured": true, 01:29:48.328 "data_offset": 2048, 01:29:48.328 "data_size": 63488 01:29:48.328 }, 01:29:48.328 { 01:29:48.328 "name": "BaseBdev3", 01:29:48.328 "uuid": "e434e316-1039-462b-bb0a-906632494500", 01:29:48.328 "is_configured": true, 01:29:48.328 "data_offset": 2048, 01:29:48.328 "data_size": 63488 01:29:48.328 } 01:29:48.328 ] 01:29:48.328 }' 01:29:48.328 05:24:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:29:48.328 05:24:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:29:48.586 05:24:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 01:29:48.586 05:24:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 01:29:48.586 05:24:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 01:29:48.586 05:24:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:48.586 05:24:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 01:29:48.586 05:24:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:29:48.586 05:24:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:48.586 05:24:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 01:29:48.586 05:24:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
01:29:48.586 05:24:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 01:29:48.586 05:24:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:48.586 05:24:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:29:48.586 [2024-12-09 05:24:40.198811] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 01:29:48.903 [2024-12-09 05:24:40.199152] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 01:29:48.903 [2024-12-09 05:24:40.275204] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 01:29:48.903 05:24:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:48.903 05:24:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 01:29:48.903 05:24:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 01:29:48.903 05:24:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 01:29:48.903 05:24:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 01:29:48.903 05:24:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:48.903 05:24:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:29:48.903 05:24:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:48.903 05:24:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 01:29:48.903 05:24:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 01:29:48.903 05:24:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 01:29:48.903 
05:24:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:48.903 05:24:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:29:48.903 [2024-12-09 05:24:40.335226] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 01:29:48.903 [2024-12-09 05:24:40.335278] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 01:29:48.903 05:24:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:48.903 05:24:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 01:29:48.903 05:24:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 01:29:48.903 05:24:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 01:29:48.903 05:24:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:48.903 05:24:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:29:48.903 05:24:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 01:29:48.903 05:24:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:48.903 05:24:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 01:29:48.903 05:24:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 01:29:48.903 05:24:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 01:29:48.903 05:24:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 01:29:48.903 05:24:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 01:29:48.903 05:24:40 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 01:29:48.903 05:24:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:48.903 05:24:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:29:48.903 BaseBdev2 01:29:48.903 05:24:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:48.903 05:24:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 01:29:48.903 05:24:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 01:29:48.903 05:24:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 01:29:48.903 05:24:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 01:29:48.903 05:24:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 01:29:48.903 05:24:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 01:29:48.903 05:24:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 01:29:48.903 05:24:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:48.903 05:24:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:29:49.161 05:24:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:49.161 05:24:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 01:29:49.161 05:24:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:49.161 05:24:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:29:49.161 [ 01:29:49.161 { 
01:29:49.161 "name": "BaseBdev2", 01:29:49.161 "aliases": [ 01:29:49.161 "8d9b1370-2364-46c9-b69d-ff8b793e8823" 01:29:49.161 ], 01:29:49.161 "product_name": "Malloc disk", 01:29:49.161 "block_size": 512, 01:29:49.161 "num_blocks": 65536, 01:29:49.161 "uuid": "8d9b1370-2364-46c9-b69d-ff8b793e8823", 01:29:49.161 "assigned_rate_limits": { 01:29:49.161 "rw_ios_per_sec": 0, 01:29:49.161 "rw_mbytes_per_sec": 0, 01:29:49.161 "r_mbytes_per_sec": 0, 01:29:49.161 "w_mbytes_per_sec": 0 01:29:49.161 }, 01:29:49.161 "claimed": false, 01:29:49.161 "zoned": false, 01:29:49.161 "supported_io_types": { 01:29:49.161 "read": true, 01:29:49.161 "write": true, 01:29:49.161 "unmap": true, 01:29:49.161 "flush": true, 01:29:49.161 "reset": true, 01:29:49.161 "nvme_admin": false, 01:29:49.161 "nvme_io": false, 01:29:49.161 "nvme_io_md": false, 01:29:49.161 "write_zeroes": true, 01:29:49.161 "zcopy": true, 01:29:49.161 "get_zone_info": false, 01:29:49.161 "zone_management": false, 01:29:49.161 "zone_append": false, 01:29:49.161 "compare": false, 01:29:49.161 "compare_and_write": false, 01:29:49.161 "abort": true, 01:29:49.161 "seek_hole": false, 01:29:49.161 "seek_data": false, 01:29:49.161 "copy": true, 01:29:49.161 "nvme_iov_md": false 01:29:49.161 }, 01:29:49.161 "memory_domains": [ 01:29:49.161 { 01:29:49.161 "dma_device_id": "system", 01:29:49.161 "dma_device_type": 1 01:29:49.161 }, 01:29:49.161 { 01:29:49.161 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:29:49.161 "dma_device_type": 2 01:29:49.161 } 01:29:49.161 ], 01:29:49.161 "driver_specific": {} 01:29:49.161 } 01:29:49.161 ] 01:29:49.161 05:24:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:49.161 05:24:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 01:29:49.161 05:24:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 01:29:49.161 05:24:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- 
# (( i < num_base_bdevs )) 01:29:49.161 05:24:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 01:29:49.161 05:24:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:49.161 05:24:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:29:49.161 BaseBdev3 01:29:49.161 05:24:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:49.161 05:24:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 01:29:49.161 05:24:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 01:29:49.161 05:24:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 01:29:49.161 05:24:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 01:29:49.162 05:24:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 01:29:49.162 05:24:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 01:29:49.162 05:24:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 01:29:49.162 05:24:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:49.162 05:24:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:29:49.162 05:24:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:49.162 05:24:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 01:29:49.162 05:24:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:49.162 05:24:40 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:29:49.162 [ 01:29:49.162 { 01:29:49.162 "name": "BaseBdev3", 01:29:49.162 "aliases": [ 01:29:49.162 "ff54a0cf-3501-4d8c-9223-9fe54a3012b5" 01:29:49.162 ], 01:29:49.162 "product_name": "Malloc disk", 01:29:49.162 "block_size": 512, 01:29:49.162 "num_blocks": 65536, 01:29:49.162 "uuid": "ff54a0cf-3501-4d8c-9223-9fe54a3012b5", 01:29:49.162 "assigned_rate_limits": { 01:29:49.162 "rw_ios_per_sec": 0, 01:29:49.162 "rw_mbytes_per_sec": 0, 01:29:49.162 "r_mbytes_per_sec": 0, 01:29:49.162 "w_mbytes_per_sec": 0 01:29:49.162 }, 01:29:49.162 "claimed": false, 01:29:49.162 "zoned": false, 01:29:49.162 "supported_io_types": { 01:29:49.162 "read": true, 01:29:49.162 "write": true, 01:29:49.162 "unmap": true, 01:29:49.162 "flush": true, 01:29:49.162 "reset": true, 01:29:49.162 "nvme_admin": false, 01:29:49.162 "nvme_io": false, 01:29:49.162 "nvme_io_md": false, 01:29:49.162 "write_zeroes": true, 01:29:49.162 "zcopy": true, 01:29:49.162 "get_zone_info": false, 01:29:49.162 "zone_management": false, 01:29:49.162 "zone_append": false, 01:29:49.162 "compare": false, 01:29:49.162 "compare_and_write": false, 01:29:49.162 "abort": true, 01:29:49.162 "seek_hole": false, 01:29:49.162 "seek_data": false, 01:29:49.162 "copy": true, 01:29:49.162 "nvme_iov_md": false 01:29:49.162 }, 01:29:49.162 "memory_domains": [ 01:29:49.162 { 01:29:49.162 "dma_device_id": "system", 01:29:49.162 "dma_device_type": 1 01:29:49.162 }, 01:29:49.162 { 01:29:49.162 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:29:49.162 "dma_device_type": 2 01:29:49.162 } 01:29:49.162 ], 01:29:49.162 "driver_specific": {} 01:29:49.162 } 01:29:49.162 ] 01:29:49.162 05:24:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:49.162 05:24:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 01:29:49.162 05:24:40 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 01:29:49.162 05:24:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 01:29:49.162 05:24:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 01:29:49.162 05:24:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:49.162 05:24:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:29:49.162 [2024-12-09 05:24:40.624199] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 01:29:49.162 [2024-12-09 05:24:40.624433] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 01:29:49.162 [2024-12-09 05:24:40.624573] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 01:29:49.162 [2024-12-09 05:24:40.627205] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 01:29:49.162 05:24:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:49.162 05:24:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 01:29:49.162 05:24:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:29:49.162 05:24:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:29:49.162 05:24:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 01:29:49.162 05:24:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:29:49.162 05:24:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 01:29:49.162 05:24:40 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:29:49.162 05:24:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:29:49.162 05:24:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:29:49.162 05:24:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 01:29:49.162 05:24:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:29:49.162 05:24:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:49.162 05:24:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:29:49.162 05:24:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:29:49.162 05:24:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:49.162 05:24:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:29:49.162 "name": "Existed_Raid", 01:29:49.162 "uuid": "8294ca87-9fb4-45fd-8bc6-dd2c159944f1", 01:29:49.162 "strip_size_kb": 64, 01:29:49.162 "state": "configuring", 01:29:49.162 "raid_level": "raid5f", 01:29:49.162 "superblock": true, 01:29:49.162 "num_base_bdevs": 3, 01:29:49.162 "num_base_bdevs_discovered": 2, 01:29:49.162 "num_base_bdevs_operational": 3, 01:29:49.162 "base_bdevs_list": [ 01:29:49.162 { 01:29:49.162 "name": "BaseBdev1", 01:29:49.162 "uuid": "00000000-0000-0000-0000-000000000000", 01:29:49.162 "is_configured": false, 01:29:49.162 "data_offset": 0, 01:29:49.162 "data_size": 0 01:29:49.162 }, 01:29:49.162 { 01:29:49.162 "name": "BaseBdev2", 01:29:49.162 "uuid": "8d9b1370-2364-46c9-b69d-ff8b793e8823", 01:29:49.162 "is_configured": true, 01:29:49.162 "data_offset": 2048, 01:29:49.162 "data_size": 63488 01:29:49.162 }, 01:29:49.162 { 
01:29:49.162 "name": "BaseBdev3", 01:29:49.162 "uuid": "ff54a0cf-3501-4d8c-9223-9fe54a3012b5", 01:29:49.162 "is_configured": true, 01:29:49.162 "data_offset": 2048, 01:29:49.162 "data_size": 63488 01:29:49.162 } 01:29:49.162 ] 01:29:49.162 }' 01:29:49.162 05:24:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:29:49.162 05:24:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:29:49.749 05:24:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 01:29:49.749 05:24:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:49.749 05:24:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:29:49.749 [2024-12-09 05:24:41.152466] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 01:29:49.749 05:24:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:49.749 05:24:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 01:29:49.749 05:24:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:29:49.749 05:24:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:29:49.749 05:24:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 01:29:49.749 05:24:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:29:49.749 05:24:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 01:29:49.749 05:24:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:29:49.749 05:24:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 01:29:49.749 05:24:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:29:49.749 05:24:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 01:29:49.749 05:24:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:29:49.749 05:24:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:29:49.749 05:24:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:49.749 05:24:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:29:49.749 05:24:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:49.749 05:24:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:29:49.749 "name": "Existed_Raid", 01:29:49.749 "uuid": "8294ca87-9fb4-45fd-8bc6-dd2c159944f1", 01:29:49.749 "strip_size_kb": 64, 01:29:49.749 "state": "configuring", 01:29:49.749 "raid_level": "raid5f", 01:29:49.749 "superblock": true, 01:29:49.749 "num_base_bdevs": 3, 01:29:49.749 "num_base_bdevs_discovered": 1, 01:29:49.749 "num_base_bdevs_operational": 3, 01:29:49.749 "base_bdevs_list": [ 01:29:49.749 { 01:29:49.749 "name": "BaseBdev1", 01:29:49.749 "uuid": "00000000-0000-0000-0000-000000000000", 01:29:49.749 "is_configured": false, 01:29:49.749 "data_offset": 0, 01:29:49.749 "data_size": 0 01:29:49.749 }, 01:29:49.749 { 01:29:49.749 "name": null, 01:29:49.749 "uuid": "8d9b1370-2364-46c9-b69d-ff8b793e8823", 01:29:49.749 "is_configured": false, 01:29:49.749 "data_offset": 0, 01:29:49.749 "data_size": 63488 01:29:49.749 }, 01:29:49.749 { 01:29:49.749 "name": "BaseBdev3", 01:29:49.749 "uuid": "ff54a0cf-3501-4d8c-9223-9fe54a3012b5", 01:29:49.749 "is_configured": true, 01:29:49.749 "data_offset": 2048, 01:29:49.749 "data_size": 
63488 01:29:49.749 } 01:29:49.749 ] 01:29:49.749 }' 01:29:49.749 05:24:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:29:49.749 05:24:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:29:50.315 05:24:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 01:29:50.315 05:24:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 01:29:50.315 05:24:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:50.315 05:24:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:29:50.315 05:24:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:50.315 05:24:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 01:29:50.315 05:24:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 01:29:50.315 05:24:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:50.315 05:24:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:29:50.315 [2024-12-09 05:24:41.763985] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 01:29:50.315 BaseBdev1 01:29:50.315 05:24:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:50.315 05:24:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 01:29:50.315 05:24:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 01:29:50.315 05:24:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 01:29:50.315 05:24:41 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 01:29:50.315 05:24:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 01:29:50.315 05:24:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 01:29:50.315 05:24:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 01:29:50.315 05:24:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:50.315 05:24:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:29:50.315 05:24:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:50.315 05:24:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 01:29:50.315 05:24:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:50.315 05:24:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:29:50.315 [ 01:29:50.315 { 01:29:50.315 "name": "BaseBdev1", 01:29:50.315 "aliases": [ 01:29:50.315 "04136c55-c164-4ad1-9004-7231bb886b8e" 01:29:50.315 ], 01:29:50.315 "product_name": "Malloc disk", 01:29:50.315 "block_size": 512, 01:29:50.315 "num_blocks": 65536, 01:29:50.315 "uuid": "04136c55-c164-4ad1-9004-7231bb886b8e", 01:29:50.315 "assigned_rate_limits": { 01:29:50.315 "rw_ios_per_sec": 0, 01:29:50.315 "rw_mbytes_per_sec": 0, 01:29:50.315 "r_mbytes_per_sec": 0, 01:29:50.315 "w_mbytes_per_sec": 0 01:29:50.315 }, 01:29:50.315 "claimed": true, 01:29:50.315 "claim_type": "exclusive_write", 01:29:50.315 "zoned": false, 01:29:50.315 "supported_io_types": { 01:29:50.315 "read": true, 01:29:50.315 "write": true, 01:29:50.315 "unmap": true, 01:29:50.315 "flush": true, 01:29:50.315 "reset": true, 01:29:50.315 "nvme_admin": false, 01:29:50.315 
"nvme_io": false, 01:29:50.315 "nvme_io_md": false, 01:29:50.315 "write_zeroes": true, 01:29:50.315 "zcopy": true, 01:29:50.315 "get_zone_info": false, 01:29:50.315 "zone_management": false, 01:29:50.315 "zone_append": false, 01:29:50.315 "compare": false, 01:29:50.315 "compare_and_write": false, 01:29:50.315 "abort": true, 01:29:50.315 "seek_hole": false, 01:29:50.315 "seek_data": false, 01:29:50.315 "copy": true, 01:29:50.315 "nvme_iov_md": false 01:29:50.315 }, 01:29:50.315 "memory_domains": [ 01:29:50.315 { 01:29:50.315 "dma_device_id": "system", 01:29:50.315 "dma_device_type": 1 01:29:50.315 }, 01:29:50.315 { 01:29:50.315 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:29:50.315 "dma_device_type": 2 01:29:50.315 } 01:29:50.315 ], 01:29:50.315 "driver_specific": {} 01:29:50.315 } 01:29:50.315 ] 01:29:50.315 05:24:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:50.315 05:24:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 01:29:50.315 05:24:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 01:29:50.315 05:24:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:29:50.315 05:24:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:29:50.315 05:24:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 01:29:50.315 05:24:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:29:50.315 05:24:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 01:29:50.315 05:24:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:29:50.315 05:24:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 01:29:50.315 05:24:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:29:50.315 05:24:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 01:29:50.315 05:24:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:29:50.315 05:24:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:50.315 05:24:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:29:50.315 05:24:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:29:50.315 05:24:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:50.315 05:24:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:29:50.315 "name": "Existed_Raid", 01:29:50.315 "uuid": "8294ca87-9fb4-45fd-8bc6-dd2c159944f1", 01:29:50.315 "strip_size_kb": 64, 01:29:50.315 "state": "configuring", 01:29:50.315 "raid_level": "raid5f", 01:29:50.315 "superblock": true, 01:29:50.315 "num_base_bdevs": 3, 01:29:50.315 "num_base_bdevs_discovered": 2, 01:29:50.315 "num_base_bdevs_operational": 3, 01:29:50.315 "base_bdevs_list": [ 01:29:50.315 { 01:29:50.315 "name": "BaseBdev1", 01:29:50.315 "uuid": "04136c55-c164-4ad1-9004-7231bb886b8e", 01:29:50.315 "is_configured": true, 01:29:50.315 "data_offset": 2048, 01:29:50.315 "data_size": 63488 01:29:50.315 }, 01:29:50.315 { 01:29:50.315 "name": null, 01:29:50.315 "uuid": "8d9b1370-2364-46c9-b69d-ff8b793e8823", 01:29:50.315 "is_configured": false, 01:29:50.315 "data_offset": 0, 01:29:50.315 "data_size": 63488 01:29:50.315 }, 01:29:50.315 { 01:29:50.315 "name": "BaseBdev3", 01:29:50.315 "uuid": "ff54a0cf-3501-4d8c-9223-9fe54a3012b5", 01:29:50.315 "is_configured": true, 01:29:50.315 "data_offset": 2048, 01:29:50.315 "data_size": 
63488 01:29:50.315 } 01:29:50.315 ] 01:29:50.315 }' 01:29:50.315 05:24:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:29:50.315 05:24:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:29:50.881 05:24:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 01:29:50.881 05:24:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:50.881 05:24:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:29:50.881 05:24:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 01:29:50.881 05:24:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:50.881 05:24:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 01:29:50.881 05:24:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 01:29:50.881 05:24:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:50.881 05:24:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:29:50.881 [2024-12-09 05:24:42.360180] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 01:29:50.881 05:24:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:50.881 05:24:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 01:29:50.881 05:24:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:29:50.881 05:24:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:29:50.881 05:24:42 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 01:29:50.881 05:24:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:29:50.881 05:24:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 01:29:50.881 05:24:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:29:50.881 05:24:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:29:50.881 05:24:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:29:50.881 05:24:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 01:29:50.881 05:24:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:29:50.881 05:24:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:29:50.881 05:24:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:50.881 05:24:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:29:50.881 05:24:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:50.881 05:24:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:29:50.881 "name": "Existed_Raid", 01:29:50.881 "uuid": "8294ca87-9fb4-45fd-8bc6-dd2c159944f1", 01:29:50.881 "strip_size_kb": 64, 01:29:50.881 "state": "configuring", 01:29:50.881 "raid_level": "raid5f", 01:29:50.881 "superblock": true, 01:29:50.881 "num_base_bdevs": 3, 01:29:50.881 "num_base_bdevs_discovered": 1, 01:29:50.881 "num_base_bdevs_operational": 3, 01:29:50.881 "base_bdevs_list": [ 01:29:50.881 { 01:29:50.881 "name": "BaseBdev1", 01:29:50.881 "uuid": "04136c55-c164-4ad1-9004-7231bb886b8e", 
01:29:50.881 "is_configured": true, 01:29:50.881 "data_offset": 2048, 01:29:50.881 "data_size": 63488 01:29:50.881 }, 01:29:50.881 { 01:29:50.881 "name": null, 01:29:50.881 "uuid": "8d9b1370-2364-46c9-b69d-ff8b793e8823", 01:29:50.881 "is_configured": false, 01:29:50.881 "data_offset": 0, 01:29:50.881 "data_size": 63488 01:29:50.881 }, 01:29:50.881 { 01:29:50.881 "name": null, 01:29:50.881 "uuid": "ff54a0cf-3501-4d8c-9223-9fe54a3012b5", 01:29:50.881 "is_configured": false, 01:29:50.881 "data_offset": 0, 01:29:50.881 "data_size": 63488 01:29:50.881 } 01:29:50.881 ] 01:29:50.881 }' 01:29:50.881 05:24:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:29:50.881 05:24:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:29:51.447 05:24:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 01:29:51.447 05:24:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:51.447 05:24:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:29:51.447 05:24:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 01:29:51.447 05:24:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:51.447 05:24:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 01:29:51.447 05:24:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 01:29:51.447 05:24:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:51.447 05:24:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:29:51.447 [2024-12-09 05:24:42.952357] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is 
claimed 01:29:51.447 05:24:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:51.447 05:24:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 01:29:51.447 05:24:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:29:51.447 05:24:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:29:51.447 05:24:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 01:29:51.447 05:24:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:29:51.447 05:24:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 01:29:51.447 05:24:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:29:51.447 05:24:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:29:51.447 05:24:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:29:51.447 05:24:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 01:29:51.447 05:24:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:29:51.447 05:24:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:51.447 05:24:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:29:51.447 05:24:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:29:51.447 05:24:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:51.447 05:24:43 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:29:51.447 "name": "Existed_Raid", 01:29:51.447 "uuid": "8294ca87-9fb4-45fd-8bc6-dd2c159944f1", 01:29:51.447 "strip_size_kb": 64, 01:29:51.447 "state": "configuring", 01:29:51.447 "raid_level": "raid5f", 01:29:51.447 "superblock": true, 01:29:51.447 "num_base_bdevs": 3, 01:29:51.447 "num_base_bdevs_discovered": 2, 01:29:51.447 "num_base_bdevs_operational": 3, 01:29:51.447 "base_bdevs_list": [ 01:29:51.447 { 01:29:51.447 "name": "BaseBdev1", 01:29:51.447 "uuid": "04136c55-c164-4ad1-9004-7231bb886b8e", 01:29:51.448 "is_configured": true, 01:29:51.448 "data_offset": 2048, 01:29:51.448 "data_size": 63488 01:29:51.448 }, 01:29:51.448 { 01:29:51.448 "name": null, 01:29:51.448 "uuid": "8d9b1370-2364-46c9-b69d-ff8b793e8823", 01:29:51.448 "is_configured": false, 01:29:51.448 "data_offset": 0, 01:29:51.448 "data_size": 63488 01:29:51.448 }, 01:29:51.448 { 01:29:51.448 "name": "BaseBdev3", 01:29:51.448 "uuid": "ff54a0cf-3501-4d8c-9223-9fe54a3012b5", 01:29:51.448 "is_configured": true, 01:29:51.448 "data_offset": 2048, 01:29:51.448 "data_size": 63488 01:29:51.448 } 01:29:51.448 ] 01:29:51.448 }' 01:29:51.448 05:24:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:29:51.448 05:24:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:29:52.015 05:24:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 01:29:52.015 05:24:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 01:29:52.015 05:24:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:52.015 05:24:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:29:52.015 05:24:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:52.015 05:24:43 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 01:29:52.015 05:24:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 01:29:52.015 05:24:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:52.015 05:24:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:29:52.015 [2024-12-09 05:24:43.540570] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 01:29:52.015 05:24:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:52.015 05:24:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 01:29:52.015 05:24:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:29:52.015 05:24:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:29:52.015 05:24:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 01:29:52.015 05:24:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:29:52.015 05:24:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 01:29:52.015 05:24:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:29:52.015 05:24:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:29:52.015 05:24:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:29:52.015 05:24:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 01:29:52.015 05:24:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
01:29:52.015 05:24:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:52.015 05:24:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:29:52.015 05:24:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:29:52.273 05:24:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:52.273 05:24:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:29:52.273 "name": "Existed_Raid", 01:29:52.273 "uuid": "8294ca87-9fb4-45fd-8bc6-dd2c159944f1", 01:29:52.273 "strip_size_kb": 64, 01:29:52.273 "state": "configuring", 01:29:52.273 "raid_level": "raid5f", 01:29:52.273 "superblock": true, 01:29:52.273 "num_base_bdevs": 3, 01:29:52.273 "num_base_bdevs_discovered": 1, 01:29:52.273 "num_base_bdevs_operational": 3, 01:29:52.273 "base_bdevs_list": [ 01:29:52.273 { 01:29:52.273 "name": null, 01:29:52.273 "uuid": "04136c55-c164-4ad1-9004-7231bb886b8e", 01:29:52.273 "is_configured": false, 01:29:52.273 "data_offset": 0, 01:29:52.273 "data_size": 63488 01:29:52.273 }, 01:29:52.273 { 01:29:52.273 "name": null, 01:29:52.273 "uuid": "8d9b1370-2364-46c9-b69d-ff8b793e8823", 01:29:52.273 "is_configured": false, 01:29:52.273 "data_offset": 0, 01:29:52.273 "data_size": 63488 01:29:52.273 }, 01:29:52.273 { 01:29:52.273 "name": "BaseBdev3", 01:29:52.273 "uuid": "ff54a0cf-3501-4d8c-9223-9fe54a3012b5", 01:29:52.273 "is_configured": true, 01:29:52.273 "data_offset": 2048, 01:29:52.273 "data_size": 63488 01:29:52.273 } 01:29:52.273 ] 01:29:52.273 }' 01:29:52.273 05:24:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:29:52.273 05:24:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:29:52.840 05:24:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd 
bdev_raid_get_bdevs all 01:29:52.840 05:24:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 01:29:52.840 05:24:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:52.840 05:24:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:29:52.840 05:24:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:52.840 05:24:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 01:29:52.840 05:24:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 01:29:52.840 05:24:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:52.840 05:24:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:29:52.840 [2024-12-09 05:24:44.214969] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 01:29:52.840 05:24:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:52.840 05:24:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 01:29:52.840 05:24:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:29:52.840 05:24:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:29:52.840 05:24:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 01:29:52.840 05:24:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:29:52.840 05:24:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 01:29:52.840 05:24:44 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:29:52.840 05:24:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:29:52.840 05:24:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:29:52.840 05:24:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 01:29:52.840 05:24:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:29:52.840 05:24:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:29:52.840 05:24:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:52.840 05:24:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:29:52.840 05:24:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:52.840 05:24:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:29:52.840 "name": "Existed_Raid", 01:29:52.840 "uuid": "8294ca87-9fb4-45fd-8bc6-dd2c159944f1", 01:29:52.840 "strip_size_kb": 64, 01:29:52.840 "state": "configuring", 01:29:52.840 "raid_level": "raid5f", 01:29:52.840 "superblock": true, 01:29:52.840 "num_base_bdevs": 3, 01:29:52.840 "num_base_bdevs_discovered": 2, 01:29:52.840 "num_base_bdevs_operational": 3, 01:29:52.840 "base_bdevs_list": [ 01:29:52.840 { 01:29:52.840 "name": null, 01:29:52.840 "uuid": "04136c55-c164-4ad1-9004-7231bb886b8e", 01:29:52.840 "is_configured": false, 01:29:52.840 "data_offset": 0, 01:29:52.840 "data_size": 63488 01:29:52.840 }, 01:29:52.840 { 01:29:52.840 "name": "BaseBdev2", 01:29:52.840 "uuid": "8d9b1370-2364-46c9-b69d-ff8b793e8823", 01:29:52.841 "is_configured": true, 01:29:52.841 "data_offset": 2048, 01:29:52.841 "data_size": 63488 01:29:52.841 }, 01:29:52.841 { 
01:29:52.841 "name": "BaseBdev3", 01:29:52.841 "uuid": "ff54a0cf-3501-4d8c-9223-9fe54a3012b5", 01:29:52.841 "is_configured": true, 01:29:52.841 "data_offset": 2048, 01:29:52.841 "data_size": 63488 01:29:52.841 } 01:29:52.841 ] 01:29:52.841 }' 01:29:52.841 05:24:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:29:52.841 05:24:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:29:53.408 05:24:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 01:29:53.408 05:24:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 01:29:53.408 05:24:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:53.408 05:24:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:29:53.408 05:24:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:53.408 05:24:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 01:29:53.408 05:24:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 01:29:53.408 05:24:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:53.408 05:24:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:29:53.408 05:24:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 01:29:53.408 05:24:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:53.408 05:24:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 04136c55-c164-4ad1-9004-7231bb886b8e 01:29:53.408 05:24:44 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 01:29:53.408 05:24:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:29:53.408 [2024-12-09 05:24:44.879545] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 01:29:53.408 [2024-12-09 05:24:44.879830] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 01:29:53.408 [2024-12-09 05:24:44.879853] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 01:29:53.408 NewBaseBdev 01:29:53.408 [2024-12-09 05:24:44.880151] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 01:29:53.408 05:24:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:53.408 05:24:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 01:29:53.408 05:24:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 01:29:53.408 05:24:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 01:29:53.408 05:24:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 01:29:53.408 05:24:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 01:29:53.408 05:24:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 01:29:53.408 05:24:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 01:29:53.408 05:24:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:53.408 05:24:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:29:53.408 [2024-12-09 05:24:44.885083] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 01:29:53.408 
[2024-12-09 05:24:44.885107] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 01:29:53.408 [2024-12-09 05:24:44.885405] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:29:53.408 05:24:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:53.408 05:24:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 01:29:53.408 05:24:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:53.408 05:24:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:29:53.408 [ 01:29:53.408 { 01:29:53.408 "name": "NewBaseBdev", 01:29:53.408 "aliases": [ 01:29:53.408 "04136c55-c164-4ad1-9004-7231bb886b8e" 01:29:53.408 ], 01:29:53.408 "product_name": "Malloc disk", 01:29:53.408 "block_size": 512, 01:29:53.408 "num_blocks": 65536, 01:29:53.408 "uuid": "04136c55-c164-4ad1-9004-7231bb886b8e", 01:29:53.408 "assigned_rate_limits": { 01:29:53.408 "rw_ios_per_sec": 0, 01:29:53.408 "rw_mbytes_per_sec": 0, 01:29:53.408 "r_mbytes_per_sec": 0, 01:29:53.408 "w_mbytes_per_sec": 0 01:29:53.408 }, 01:29:53.408 "claimed": true, 01:29:53.408 "claim_type": "exclusive_write", 01:29:53.408 "zoned": false, 01:29:53.408 "supported_io_types": { 01:29:53.408 "read": true, 01:29:53.408 "write": true, 01:29:53.408 "unmap": true, 01:29:53.408 "flush": true, 01:29:53.408 "reset": true, 01:29:53.408 "nvme_admin": false, 01:29:53.408 "nvme_io": false, 01:29:53.408 "nvme_io_md": false, 01:29:53.408 "write_zeroes": true, 01:29:53.408 "zcopy": true, 01:29:53.408 "get_zone_info": false, 01:29:53.408 "zone_management": false, 01:29:53.408 "zone_append": false, 01:29:53.408 "compare": false, 01:29:53.408 "compare_and_write": false, 01:29:53.408 "abort": true, 01:29:53.408 "seek_hole": false, 01:29:53.408 "seek_data": false, 
01:29:53.408 "copy": true, 01:29:53.408 "nvme_iov_md": false 01:29:53.408 }, 01:29:53.408 "memory_domains": [ 01:29:53.408 { 01:29:53.408 "dma_device_id": "system", 01:29:53.408 "dma_device_type": 1 01:29:53.408 }, 01:29:53.408 { 01:29:53.408 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:29:53.408 "dma_device_type": 2 01:29:53.408 } 01:29:53.408 ], 01:29:53.408 "driver_specific": {} 01:29:53.408 } 01:29:53.408 ] 01:29:53.408 05:24:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:53.408 05:24:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 01:29:53.408 05:24:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 01:29:53.408 05:24:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:29:53.408 05:24:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:29:53.408 05:24:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 01:29:53.409 05:24:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:29:53.409 05:24:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 01:29:53.409 05:24:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:29:53.409 05:24:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:29:53.409 05:24:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:29:53.409 05:24:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 01:29:53.409 05:24:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:29:53.409 05:24:44 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:53.409 05:24:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:29:53.409 05:24:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:29:53.409 05:24:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:53.409 05:24:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:29:53.409 "name": "Existed_Raid", 01:29:53.409 "uuid": "8294ca87-9fb4-45fd-8bc6-dd2c159944f1", 01:29:53.409 "strip_size_kb": 64, 01:29:53.409 "state": "online", 01:29:53.409 "raid_level": "raid5f", 01:29:53.409 "superblock": true, 01:29:53.409 "num_base_bdevs": 3, 01:29:53.409 "num_base_bdevs_discovered": 3, 01:29:53.409 "num_base_bdevs_operational": 3, 01:29:53.409 "base_bdevs_list": [ 01:29:53.409 { 01:29:53.409 "name": "NewBaseBdev", 01:29:53.409 "uuid": "04136c55-c164-4ad1-9004-7231bb886b8e", 01:29:53.409 "is_configured": true, 01:29:53.409 "data_offset": 2048, 01:29:53.409 "data_size": 63488 01:29:53.409 }, 01:29:53.409 { 01:29:53.409 "name": "BaseBdev2", 01:29:53.409 "uuid": "8d9b1370-2364-46c9-b69d-ff8b793e8823", 01:29:53.409 "is_configured": true, 01:29:53.409 "data_offset": 2048, 01:29:53.409 "data_size": 63488 01:29:53.409 }, 01:29:53.409 { 01:29:53.409 "name": "BaseBdev3", 01:29:53.409 "uuid": "ff54a0cf-3501-4d8c-9223-9fe54a3012b5", 01:29:53.409 "is_configured": true, 01:29:53.409 "data_offset": 2048, 01:29:53.409 "data_size": 63488 01:29:53.409 } 01:29:53.409 ] 01:29:53.409 }' 01:29:53.409 05:24:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:29:53.409 05:24:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:29:53.975 05:24:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # 
verify_raid_bdev_properties Existed_Raid 01:29:53.975 05:24:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 01:29:53.975 05:24:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 01:29:53.975 05:24:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 01:29:53.975 05:24:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 01:29:53.975 05:24:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 01:29:53.975 05:24:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 01:29:53.975 05:24:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:53.975 05:24:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:29:53.975 05:24:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 01:29:53.975 [2024-12-09 05:24:45.443677] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 01:29:53.975 05:24:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:53.975 05:24:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 01:29:53.975 "name": "Existed_Raid", 01:29:53.975 "aliases": [ 01:29:53.975 "8294ca87-9fb4-45fd-8bc6-dd2c159944f1" 01:29:53.975 ], 01:29:53.975 "product_name": "Raid Volume", 01:29:53.975 "block_size": 512, 01:29:53.975 "num_blocks": 126976, 01:29:53.975 "uuid": "8294ca87-9fb4-45fd-8bc6-dd2c159944f1", 01:29:53.975 "assigned_rate_limits": { 01:29:53.975 "rw_ios_per_sec": 0, 01:29:53.975 "rw_mbytes_per_sec": 0, 01:29:53.975 "r_mbytes_per_sec": 0, 01:29:53.975 "w_mbytes_per_sec": 0 01:29:53.975 }, 01:29:53.975 "claimed": false, 01:29:53.975 "zoned": false, 01:29:53.975 
"supported_io_types": { 01:29:53.975 "read": true, 01:29:53.975 "write": true, 01:29:53.975 "unmap": false, 01:29:53.975 "flush": false, 01:29:53.975 "reset": true, 01:29:53.975 "nvme_admin": false, 01:29:53.975 "nvme_io": false, 01:29:53.975 "nvme_io_md": false, 01:29:53.975 "write_zeroes": true, 01:29:53.975 "zcopy": false, 01:29:53.975 "get_zone_info": false, 01:29:53.975 "zone_management": false, 01:29:53.975 "zone_append": false, 01:29:53.975 "compare": false, 01:29:53.975 "compare_and_write": false, 01:29:53.975 "abort": false, 01:29:53.975 "seek_hole": false, 01:29:53.975 "seek_data": false, 01:29:53.975 "copy": false, 01:29:53.975 "nvme_iov_md": false 01:29:53.975 }, 01:29:53.975 "driver_specific": { 01:29:53.975 "raid": { 01:29:53.975 "uuid": "8294ca87-9fb4-45fd-8bc6-dd2c159944f1", 01:29:53.975 "strip_size_kb": 64, 01:29:53.975 "state": "online", 01:29:53.975 "raid_level": "raid5f", 01:29:53.975 "superblock": true, 01:29:53.975 "num_base_bdevs": 3, 01:29:53.975 "num_base_bdevs_discovered": 3, 01:29:53.975 "num_base_bdevs_operational": 3, 01:29:53.975 "base_bdevs_list": [ 01:29:53.975 { 01:29:53.975 "name": "NewBaseBdev", 01:29:53.975 "uuid": "04136c55-c164-4ad1-9004-7231bb886b8e", 01:29:53.975 "is_configured": true, 01:29:53.975 "data_offset": 2048, 01:29:53.975 "data_size": 63488 01:29:53.975 }, 01:29:53.975 { 01:29:53.975 "name": "BaseBdev2", 01:29:53.975 "uuid": "8d9b1370-2364-46c9-b69d-ff8b793e8823", 01:29:53.975 "is_configured": true, 01:29:53.975 "data_offset": 2048, 01:29:53.975 "data_size": 63488 01:29:53.975 }, 01:29:53.975 { 01:29:53.975 "name": "BaseBdev3", 01:29:53.975 "uuid": "ff54a0cf-3501-4d8c-9223-9fe54a3012b5", 01:29:53.975 "is_configured": true, 01:29:53.975 "data_offset": 2048, 01:29:53.975 "data_size": 63488 01:29:53.975 } 01:29:53.975 ] 01:29:53.975 } 01:29:53.975 } 01:29:53.975 }' 01:29:53.975 05:24:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | 
select(.is_configured == true).name' 01:29:53.975 05:24:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 01:29:53.975 BaseBdev2 01:29:53.975 BaseBdev3' 01:29:53.975 05:24:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:29:54.234 05:24:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 01:29:54.234 05:24:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:29:54.234 05:24:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 01:29:54.234 05:24:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:29:54.234 05:24:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:54.234 05:24:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:29:54.235 05:24:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:54.235 05:24:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:29:54.235 05:24:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:29:54.235 05:24:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:29:54.235 05:24:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 01:29:54.235 05:24:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:54.235 05:24:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:29:54.235 05:24:45 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:29:54.235 05:24:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:54.235 05:24:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:29:54.235 05:24:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:29:54.235 05:24:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:29:54.235 05:24:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 01:29:54.235 05:24:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:54.235 05:24:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:29:54.235 05:24:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:29:54.235 05:24:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:54.235 05:24:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:29:54.235 05:24:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:29:54.235 05:24:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 01:29:54.235 05:24:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:54.235 05:24:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:29:54.235 [2024-12-09 05:24:45.767519] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 01:29:54.235 [2024-12-09 05:24:45.768689] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev 
state changing from online to offline 01:29:54.235 [2024-12-09 05:24:45.768803] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 01:29:54.235 [2024-12-09 05:24:45.769191] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 01:29:54.235 [2024-12-09 05:24:45.769212] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 01:29:54.235 05:24:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:54.235 05:24:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 80796 01:29:54.235 05:24:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 80796 ']' 01:29:54.235 05:24:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 80796 01:29:54.235 05:24:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 01:29:54.235 05:24:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:29:54.235 05:24:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80796 01:29:54.235 05:24:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:29:54.235 killing process with pid 80796 01:29:54.235 05:24:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:29:54.235 05:24:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80796' 01:29:54.235 05:24:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 80796 01:29:54.235 [2024-12-09 05:24:45.809238] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 01:29:54.235 05:24:45 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@978 -- # wait 80796 01:29:54.492 [2024-12-09 05:24:46.063079] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 01:29:55.864 05:24:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 01:29:55.864 ************************************ 01:29:55.864 END TEST raid5f_state_function_test_sb 01:29:55.864 ************************************ 01:29:55.864 01:29:55.864 real 0m11.821s 01:29:55.864 user 0m19.668s 01:29:55.864 sys 0m1.656s 01:29:55.864 05:24:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 01:29:55.864 05:24:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:29:55.864 05:24:47 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 3 01:29:55.864 05:24:47 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 01:29:55.864 05:24:47 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 01:29:55.864 05:24:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 01:29:55.864 ************************************ 01:29:55.864 START TEST raid5f_superblock_test 01:29:55.864 ************************************ 01:29:55.864 05:24:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 3 01:29:55.864 05:24:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 01:29:55.864 05:24:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 01:29:55.864 05:24:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 01:29:55.864 05:24:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 01:29:55.864 05:24:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 01:29:55.864 05:24:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 
01:29:55.864 05:24:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 01:29:55.864 05:24:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 01:29:55.864 05:24:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 01:29:55.864 05:24:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 01:29:55.864 05:24:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 01:29:55.864 05:24:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 01:29:55.864 05:24:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 01:29:55.864 05:24:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 01:29:55.864 05:24:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 01:29:55.864 05:24:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 01:29:55.864 05:24:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=81422 01:29:55.864 05:24:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 01:29:55.864 05:24:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 81422 01:29:55.864 05:24:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 81422 ']' 01:29:55.864 05:24:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:29:55.864 05:24:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 01:29:55.864 05:24:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
01:29:55.864 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:29:55.864 05:24:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 01:29:55.864 05:24:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:29:55.864 [2024-12-09 05:24:47.274809] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:29:55.864 [2024-12-09 05:24:47.275284] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81422 ] 01:29:55.864 [2024-12-09 05:24:47.454167] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:29:56.123 [2024-12-09 05:24:47.570504] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:29:56.381 [2024-12-09 05:24:47.765248] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 01:29:56.381 [2024-12-09 05:24:47.765286] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 01:29:56.949 05:24:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:29:56.949 05:24:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 01:29:56.949 05:24:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 01:29:56.949 05:24:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 01:29:56.949 05:24:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 01:29:56.949 05:24:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 01:29:56.949 05:24:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 01:29:56.949 05:24:48 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 01:29:56.949 05:24:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 01:29:56.949 05:24:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 01:29:56.949 05:24:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 01:29:56.949 05:24:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:56.949 05:24:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:29:56.949 malloc1 01:29:56.949 05:24:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:56.949 05:24:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 01:29:56.949 05:24:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:56.949 05:24:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:29:56.949 [2024-12-09 05:24:48.329440] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 01:29:56.949 [2024-12-09 05:24:48.329507] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:29:56.949 [2024-12-09 05:24:48.329546] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 01:29:56.949 [2024-12-09 05:24:48.329569] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:29:56.949 [2024-12-09 05:24:48.332356] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:29:56.949 [2024-12-09 05:24:48.332430] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 01:29:56.949 pt1 01:29:56.949 05:24:48 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:56.949 05:24:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 01:29:56.949 05:24:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 01:29:56.949 05:24:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 01:29:56.949 05:24:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 01:29:56.949 05:24:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 01:29:56.949 05:24:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 01:29:56.949 05:24:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 01:29:56.949 05:24:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 01:29:56.949 05:24:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 01:29:56.949 05:24:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:56.949 05:24:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:29:56.949 malloc2 01:29:56.949 05:24:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:56.949 05:24:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 01:29:56.949 05:24:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:56.949 05:24:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:29:56.949 [2024-12-09 05:24:48.378343] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 01:29:56.949 [2024-12-09 05:24:48.378442] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:29:56.949 [2024-12-09 05:24:48.378480] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 01:29:56.949 [2024-12-09 05:24:48.378495] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:29:56.949 [2024-12-09 05:24:48.381306] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:29:56.949 [2024-12-09 05:24:48.381348] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 01:29:56.949 pt2 01:29:56.949 05:24:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:56.949 05:24:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 01:29:56.949 05:24:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 01:29:56.949 05:24:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 01:29:56.949 05:24:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 01:29:56.949 05:24:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 01:29:56.949 05:24:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 01:29:56.949 05:24:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 01:29:56.949 05:24:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 01:29:56.949 05:24:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 01:29:56.950 05:24:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:56.950 05:24:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:29:56.950 malloc3 01:29:56.950 05:24:48 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:56.950 05:24:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 01:29:56.950 05:24:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:56.950 05:24:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:29:56.950 [2024-12-09 05:24:48.439954] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 01:29:56.950 [2024-12-09 05:24:48.440196] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:29:56.950 [2024-12-09 05:24:48.440242] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 01:29:56.950 [2024-12-09 05:24:48.440258] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:29:56.950 [2024-12-09 05:24:48.443072] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:29:56.950 [2024-12-09 05:24:48.443115] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 01:29:56.950 pt3 01:29:56.950 05:24:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:56.950 05:24:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 01:29:56.950 05:24:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 01:29:56.950 05:24:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 01:29:56.950 05:24:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:56.950 05:24:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:29:56.950 [2024-12-09 05:24:48.452113] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 01:29:56.950 [2024-12-09 
05:24:48.454603] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 01:29:56.950 [2024-12-09 05:24:48.454894] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 01:29:56.950 [2024-12-09 05:24:48.455121] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 01:29:56.950 [2024-12-09 05:24:48.455166] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 01:29:56.950 [2024-12-09 05:24:48.455511] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 01:29:56.950 [2024-12-09 05:24:48.460584] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 01:29:56.950 [2024-12-09 05:24:48.460748] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 01:29:56.950 [2024-12-09 05:24:48.461115] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:29:56.950 05:24:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:56.950 05:24:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 01:29:56.950 05:24:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:29:56.950 05:24:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:29:56.950 05:24:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 01:29:56.950 05:24:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:29:56.950 05:24:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 01:29:56.950 05:24:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:29:56.950 05:24:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 01:29:56.950 05:24:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:29:56.950 05:24:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:29:56.950 05:24:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:29:56.950 05:24:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:56.950 05:24:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:29:56.950 05:24:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:29:56.950 05:24:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:56.950 05:24:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:29:56.950 "name": "raid_bdev1", 01:29:56.950 "uuid": "7e3c7383-afe0-495d-bc8f-03451a28431e", 01:29:56.950 "strip_size_kb": 64, 01:29:56.950 "state": "online", 01:29:56.950 "raid_level": "raid5f", 01:29:56.950 "superblock": true, 01:29:56.950 "num_base_bdevs": 3, 01:29:56.950 "num_base_bdevs_discovered": 3, 01:29:56.950 "num_base_bdevs_operational": 3, 01:29:56.950 "base_bdevs_list": [ 01:29:56.950 { 01:29:56.950 "name": "pt1", 01:29:56.950 "uuid": "00000000-0000-0000-0000-000000000001", 01:29:56.950 "is_configured": true, 01:29:56.950 "data_offset": 2048, 01:29:56.950 "data_size": 63488 01:29:56.950 }, 01:29:56.950 { 01:29:56.950 "name": "pt2", 01:29:56.950 "uuid": "00000000-0000-0000-0000-000000000002", 01:29:56.950 "is_configured": true, 01:29:56.950 "data_offset": 2048, 01:29:56.950 "data_size": 63488 01:29:56.950 }, 01:29:56.950 { 01:29:56.950 "name": "pt3", 01:29:56.950 "uuid": "00000000-0000-0000-0000-000000000003", 01:29:56.950 "is_configured": true, 01:29:56.950 "data_offset": 2048, 01:29:56.950 "data_size": 63488 01:29:56.950 } 01:29:56.950 ] 01:29:56.950 }' 01:29:56.950 05:24:48 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:29:56.950 05:24:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:29:57.515 05:24:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 01:29:57.515 05:24:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 01:29:57.515 05:24:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 01:29:57.515 05:24:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 01:29:57.515 05:24:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 01:29:57.515 05:24:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 01:29:57.515 05:24:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 01:29:57.515 05:24:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 01:29:57.515 05:24:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:57.515 05:24:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:29:57.515 [2024-12-09 05:24:48.943572] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 01:29:57.515 05:24:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:57.515 05:24:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 01:29:57.515 "name": "raid_bdev1", 01:29:57.515 "aliases": [ 01:29:57.515 "7e3c7383-afe0-495d-bc8f-03451a28431e" 01:29:57.515 ], 01:29:57.515 "product_name": "Raid Volume", 01:29:57.515 "block_size": 512, 01:29:57.515 "num_blocks": 126976, 01:29:57.515 "uuid": "7e3c7383-afe0-495d-bc8f-03451a28431e", 01:29:57.515 "assigned_rate_limits": { 01:29:57.515 "rw_ios_per_sec": 0, 01:29:57.515 
"rw_mbytes_per_sec": 0, 01:29:57.515 "r_mbytes_per_sec": 0, 01:29:57.515 "w_mbytes_per_sec": 0 01:29:57.515 }, 01:29:57.515 "claimed": false, 01:29:57.515 "zoned": false, 01:29:57.515 "supported_io_types": { 01:29:57.515 "read": true, 01:29:57.515 "write": true, 01:29:57.515 "unmap": false, 01:29:57.515 "flush": false, 01:29:57.515 "reset": true, 01:29:57.515 "nvme_admin": false, 01:29:57.515 "nvme_io": false, 01:29:57.515 "nvme_io_md": false, 01:29:57.515 "write_zeroes": true, 01:29:57.515 "zcopy": false, 01:29:57.515 "get_zone_info": false, 01:29:57.515 "zone_management": false, 01:29:57.515 "zone_append": false, 01:29:57.515 "compare": false, 01:29:57.515 "compare_and_write": false, 01:29:57.515 "abort": false, 01:29:57.515 "seek_hole": false, 01:29:57.515 "seek_data": false, 01:29:57.515 "copy": false, 01:29:57.515 "nvme_iov_md": false 01:29:57.515 }, 01:29:57.515 "driver_specific": { 01:29:57.515 "raid": { 01:29:57.515 "uuid": "7e3c7383-afe0-495d-bc8f-03451a28431e", 01:29:57.515 "strip_size_kb": 64, 01:29:57.515 "state": "online", 01:29:57.515 "raid_level": "raid5f", 01:29:57.515 "superblock": true, 01:29:57.515 "num_base_bdevs": 3, 01:29:57.515 "num_base_bdevs_discovered": 3, 01:29:57.515 "num_base_bdevs_operational": 3, 01:29:57.515 "base_bdevs_list": [ 01:29:57.515 { 01:29:57.515 "name": "pt1", 01:29:57.515 "uuid": "00000000-0000-0000-0000-000000000001", 01:29:57.515 "is_configured": true, 01:29:57.515 "data_offset": 2048, 01:29:57.515 "data_size": 63488 01:29:57.515 }, 01:29:57.515 { 01:29:57.515 "name": "pt2", 01:29:57.515 "uuid": "00000000-0000-0000-0000-000000000002", 01:29:57.515 "is_configured": true, 01:29:57.515 "data_offset": 2048, 01:29:57.515 "data_size": 63488 01:29:57.515 }, 01:29:57.515 { 01:29:57.515 "name": "pt3", 01:29:57.515 "uuid": "00000000-0000-0000-0000-000000000003", 01:29:57.515 "is_configured": true, 01:29:57.515 "data_offset": 2048, 01:29:57.515 "data_size": 63488 01:29:57.515 } 01:29:57.515 ] 01:29:57.515 } 01:29:57.515 } 
01:29:57.515 }' 01:29:57.515 05:24:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 01:29:57.515 05:24:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 01:29:57.515 pt2 01:29:57.515 pt3' 01:29:57.515 05:24:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:29:57.515 05:24:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 01:29:57.515 05:24:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:29:57.515 05:24:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 01:29:57.515 05:24:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:57.515 05:24:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:29:57.515 05:24:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:29:57.515 05:24:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:57.773 05:24:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:29:57.773 05:24:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:29:57.773 05:24:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:29:57.773 05:24:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 01:29:57.773 05:24:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:57.773 05:24:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:29:57.773 05:24:49 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:29:57.773 05:24:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:57.773 05:24:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:29:57.773 05:24:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:29:57.773 05:24:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:29:57.773 05:24:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 01:29:57.773 05:24:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:57.773 05:24:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:29:57.773 05:24:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:29:57.773 05:24:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:57.773 05:24:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:29:57.773 05:24:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:29:57.773 05:24:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 01:29:57.773 05:24:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:57.773 05:24:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:29:57.773 05:24:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 01:29:57.773 [2024-12-09 05:24:49.251589] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 01:29:57.773 05:24:49 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:57.773 05:24:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=7e3c7383-afe0-495d-bc8f-03451a28431e 01:29:57.773 05:24:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 7e3c7383-afe0-495d-bc8f-03451a28431e ']' 01:29:57.773 05:24:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 01:29:57.773 05:24:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:57.773 05:24:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:29:57.773 [2024-12-09 05:24:49.303416] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 01:29:57.773 [2024-12-09 05:24:49.303449] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 01:29:57.773 [2024-12-09 05:24:49.303531] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 01:29:57.773 [2024-12-09 05:24:49.303629] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 01:29:57.773 [2024-12-09 05:24:49.303646] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 01:29:57.773 05:24:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:57.773 05:24:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 01:29:57.773 05:24:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 01:29:57.773 05:24:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:57.773 05:24:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:29:57.773 05:24:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:57.773 05:24:49 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 01:29:57.773 05:24:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 01:29:57.773 05:24:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 01:29:57.773 05:24:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 01:29:57.773 05:24:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:57.773 05:24:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:29:57.773 05:24:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:57.773 05:24:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 01:29:57.773 05:24:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 01:29:57.773 05:24:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:57.773 05:24:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:29:57.773 05:24:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:57.773 05:24:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 01:29:57.773 05:24:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 01:29:57.773 05:24:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:57.773 05:24:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:29:57.773 05:24:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:57.773 05:24:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 01:29:57.773 05:24:49 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 01:29:57.773 05:24:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 01:29:58.031 05:24:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:29:58.031 05:24:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:58.031 05:24:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 01:29:58.031 05:24:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 01:29:58.031 05:24:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 01:29:58.031 05:24:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 01:29:58.031 05:24:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 01:29:58.031 05:24:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:29:58.031 05:24:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 01:29:58.031 05:24:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:29:58.031 05:24:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 01:29:58.031 05:24:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:58.031 05:24:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:29:58.031 [2024-12-09 05:24:49.459514] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 01:29:58.031 [2024-12-09 
05:24:49.462031] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 01:29:58.031 [2024-12-09 05:24:49.462168] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 01:29:58.031 [2024-12-09 05:24:49.462242] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 01:29:58.031 [2024-12-09 05:24:49.462353] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 01:29:58.031 [2024-12-09 05:24:49.462399] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 01:29:58.031 [2024-12-09 05:24:49.462444] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 01:29:58.031 [2024-12-09 05:24:49.462459] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 01:29:58.031 request: 01:29:58.031 { 01:29:58.031 "name": "raid_bdev1", 01:29:58.031 "raid_level": "raid5f", 01:29:58.031 "base_bdevs": [ 01:29:58.031 "malloc1", 01:29:58.031 "malloc2", 01:29:58.031 "malloc3" 01:29:58.031 ], 01:29:58.031 "strip_size_kb": 64, 01:29:58.031 "superblock": false, 01:29:58.031 "method": "bdev_raid_create", 01:29:58.031 "req_id": 1 01:29:58.031 } 01:29:58.031 Got JSON-RPC error response 01:29:58.031 response: 01:29:58.031 { 01:29:58.031 "code": -17, 01:29:58.031 "message": "Failed to create RAID bdev raid_bdev1: File exists" 01:29:58.031 } 01:29:58.031 05:24:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 01:29:58.031 05:24:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1 01:29:58.031 05:24:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:29:58.031 05:24:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 
01:29:58.031 05:24:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:29:58.031 05:24:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 01:29:58.031 05:24:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 01:29:58.031 05:24:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:58.031 05:24:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:29:58.031 05:24:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:58.031 05:24:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 01:29:58.031 05:24:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 01:29:58.031 05:24:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 01:29:58.031 05:24:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:58.031 05:24:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:29:58.031 [2024-12-09 05:24:49.523465] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 01:29:58.031 [2024-12-09 05:24:49.523517] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:29:58.031 [2024-12-09 05:24:49.523546] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 01:29:58.031 [2024-12-09 05:24:49.523564] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:29:58.031 [2024-12-09 05:24:49.526610] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:29:58.031 [2024-12-09 05:24:49.526654] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 01:29:58.031 [2024-12-09 05:24:49.526790] 
bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 01:29:58.031 [2024-12-09 05:24:49.526884] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 01:29:58.031 pt1 01:29:58.031 05:24:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:58.031 05:24:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 01:29:58.031 05:24:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:29:58.031 05:24:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:29:58.031 05:24:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 01:29:58.031 05:24:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:29:58.031 05:24:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 01:29:58.031 05:24:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:29:58.031 05:24:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:29:58.031 05:24:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:29:58.031 05:24:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:29:58.031 05:24:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:29:58.031 05:24:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:58.031 05:24:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:29:58.031 05:24:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:29:58.031 05:24:49 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:58.031 05:24:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:29:58.031 "name": "raid_bdev1", 01:29:58.031 "uuid": "7e3c7383-afe0-495d-bc8f-03451a28431e", 01:29:58.031 "strip_size_kb": 64, 01:29:58.031 "state": "configuring", 01:29:58.031 "raid_level": "raid5f", 01:29:58.031 "superblock": true, 01:29:58.031 "num_base_bdevs": 3, 01:29:58.031 "num_base_bdevs_discovered": 1, 01:29:58.031 "num_base_bdevs_operational": 3, 01:29:58.031 "base_bdevs_list": [ 01:29:58.031 { 01:29:58.031 "name": "pt1", 01:29:58.031 "uuid": "00000000-0000-0000-0000-000000000001", 01:29:58.031 "is_configured": true, 01:29:58.031 "data_offset": 2048, 01:29:58.031 "data_size": 63488 01:29:58.031 }, 01:29:58.031 { 01:29:58.031 "name": null, 01:29:58.031 "uuid": "00000000-0000-0000-0000-000000000002", 01:29:58.031 "is_configured": false, 01:29:58.031 "data_offset": 2048, 01:29:58.031 "data_size": 63488 01:29:58.031 }, 01:29:58.031 { 01:29:58.031 "name": null, 01:29:58.031 "uuid": "00000000-0000-0000-0000-000000000003", 01:29:58.031 "is_configured": false, 01:29:58.031 "data_offset": 2048, 01:29:58.031 "data_size": 63488 01:29:58.031 } 01:29:58.032 ] 01:29:58.032 }' 01:29:58.032 05:24:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:29:58.032 05:24:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:29:58.597 05:24:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 01:29:58.597 05:24:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 01:29:58.597 05:24:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:58.597 05:24:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:29:58.597 [2024-12-09 05:24:50.047759] vbdev_passthru.c: 
608:vbdev_passthru_register: *NOTICE*: Match on malloc2 01:29:58.597 [2024-12-09 05:24:50.047862] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:29:58.597 [2024-12-09 05:24:50.047906] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 01:29:58.597 [2024-12-09 05:24:50.047920] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:29:58.597 [2024-12-09 05:24:50.048515] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:29:58.597 [2024-12-09 05:24:50.048559] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 01:29:58.597 [2024-12-09 05:24:50.048670] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 01:29:58.597 [2024-12-09 05:24:50.048711] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 01:29:58.597 pt2 01:29:58.597 05:24:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:58.597 05:24:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 01:29:58.597 05:24:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:58.597 05:24:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:29:58.597 [2024-12-09 05:24:50.055693] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 01:29:58.597 05:24:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:58.597 05:24:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 01:29:58.597 05:24:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:29:58.597 05:24:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:29:58.597 05:24:50 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 01:29:58.597 05:24:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:29:58.597 05:24:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 01:29:58.597 05:24:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:29:58.597 05:24:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:29:58.597 05:24:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:29:58.597 05:24:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:29:58.597 05:24:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:29:58.597 05:24:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:29:58.597 05:24:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:58.597 05:24:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:29:58.597 05:24:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:58.597 05:24:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:29:58.597 "name": "raid_bdev1", 01:29:58.597 "uuid": "7e3c7383-afe0-495d-bc8f-03451a28431e", 01:29:58.597 "strip_size_kb": 64, 01:29:58.597 "state": "configuring", 01:29:58.597 "raid_level": "raid5f", 01:29:58.597 "superblock": true, 01:29:58.597 "num_base_bdevs": 3, 01:29:58.597 "num_base_bdevs_discovered": 1, 01:29:58.597 "num_base_bdevs_operational": 3, 01:29:58.597 "base_bdevs_list": [ 01:29:58.597 { 01:29:58.597 "name": "pt1", 01:29:58.597 "uuid": "00000000-0000-0000-0000-000000000001", 01:29:58.597 "is_configured": true, 01:29:58.597 "data_offset": 2048, 01:29:58.597 "data_size": 63488 01:29:58.597 }, 01:29:58.597 { 
01:29:58.597 "name": null, 01:29:58.597 "uuid": "00000000-0000-0000-0000-000000000002", 01:29:58.597 "is_configured": false, 01:29:58.597 "data_offset": 0, 01:29:58.597 "data_size": 63488 01:29:58.597 }, 01:29:58.597 { 01:29:58.597 "name": null, 01:29:58.597 "uuid": "00000000-0000-0000-0000-000000000003", 01:29:58.597 "is_configured": false, 01:29:58.597 "data_offset": 2048, 01:29:58.597 "data_size": 63488 01:29:58.597 } 01:29:58.597 ] 01:29:58.597 }' 01:29:58.597 05:24:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:29:58.597 05:24:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:29:59.164 05:24:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 01:29:59.164 05:24:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 01:29:59.164 05:24:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 01:29:59.164 05:24:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:59.164 05:24:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:29:59.164 [2024-12-09 05:24:50.559853] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 01:29:59.164 [2024-12-09 05:24:50.559955] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:29:59.164 [2024-12-09 05:24:50.559981] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 01:29:59.164 [2024-12-09 05:24:50.559997] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:29:59.164 [2024-12-09 05:24:50.560622] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:29:59.164 [2024-12-09 05:24:50.560665] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 01:29:59.164 [2024-12-09 
05:24:50.560775] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 01:29:59.164 [2024-12-09 05:24:50.560812] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 01:29:59.164 pt2 01:29:59.164 05:24:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:59.164 05:24:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 01:29:59.164 05:24:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 01:29:59.164 05:24:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 01:29:59.164 05:24:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:59.164 05:24:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:29:59.164 [2024-12-09 05:24:50.571837] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 01:29:59.164 [2024-12-09 05:24:50.571922] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:29:59.164 [2024-12-09 05:24:50.571942] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 01:29:59.164 [2024-12-09 05:24:50.571957] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:29:59.164 [2024-12-09 05:24:50.572449] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:29:59.164 [2024-12-09 05:24:50.572492] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 01:29:59.164 [2024-12-09 05:24:50.572577] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 01:29:59.164 [2024-12-09 05:24:50.572609] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 01:29:59.164 [2024-12-09 05:24:50.572783] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000007e80 01:29:59.164 [2024-12-09 05:24:50.572806] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 01:29:59.164 [2024-12-09 05:24:50.573106] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 01:29:59.164 [2024-12-09 05:24:50.578156] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 01:29:59.164 [2024-12-09 05:24:50.578195] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 01:29:59.164 pt3 01:29:59.164 [2024-12-09 05:24:50.578462] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:29:59.164 05:24:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:59.164 05:24:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 01:29:59.164 05:24:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 01:29:59.164 05:24:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 01:29:59.164 05:24:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:29:59.164 05:24:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:29:59.164 05:24:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 01:29:59.164 05:24:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:29:59.164 05:24:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 01:29:59.164 05:24:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:29:59.164 05:24:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:29:59.164 05:24:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 01:29:59.164 05:24:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:29:59.164 05:24:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:29:59.164 05:24:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:29:59.164 05:24:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:59.164 05:24:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:29:59.164 05:24:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:59.164 05:24:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:29:59.164 "name": "raid_bdev1", 01:29:59.164 "uuid": "7e3c7383-afe0-495d-bc8f-03451a28431e", 01:29:59.164 "strip_size_kb": 64, 01:29:59.164 "state": "online", 01:29:59.164 "raid_level": "raid5f", 01:29:59.164 "superblock": true, 01:29:59.164 "num_base_bdevs": 3, 01:29:59.164 "num_base_bdevs_discovered": 3, 01:29:59.164 "num_base_bdevs_operational": 3, 01:29:59.164 "base_bdevs_list": [ 01:29:59.164 { 01:29:59.164 "name": "pt1", 01:29:59.164 "uuid": "00000000-0000-0000-0000-000000000001", 01:29:59.164 "is_configured": true, 01:29:59.164 "data_offset": 2048, 01:29:59.164 "data_size": 63488 01:29:59.164 }, 01:29:59.164 { 01:29:59.164 "name": "pt2", 01:29:59.164 "uuid": "00000000-0000-0000-0000-000000000002", 01:29:59.164 "is_configured": true, 01:29:59.164 "data_offset": 2048, 01:29:59.164 "data_size": 63488 01:29:59.164 }, 01:29:59.164 { 01:29:59.164 "name": "pt3", 01:29:59.164 "uuid": "00000000-0000-0000-0000-000000000003", 01:29:59.164 "is_configured": true, 01:29:59.164 "data_offset": 2048, 01:29:59.164 "data_size": 63488 01:29:59.164 } 01:29:59.164 ] 01:29:59.164 }' 01:29:59.165 05:24:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:29:59.165 05:24:50 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:29:59.730 05:24:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 01:29:59.730 05:24:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 01:29:59.730 05:24:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 01:29:59.730 05:24:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 01:29:59.730 05:24:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 01:29:59.730 05:24:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 01:29:59.730 05:24:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 01:29:59.730 05:24:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:59.730 05:24:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 01:29:59.730 05:24:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:29:59.730 [2024-12-09 05:24:51.092639] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 01:29:59.730 05:24:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:59.730 05:24:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 01:29:59.730 "name": "raid_bdev1", 01:29:59.730 "aliases": [ 01:29:59.730 "7e3c7383-afe0-495d-bc8f-03451a28431e" 01:29:59.730 ], 01:29:59.730 "product_name": "Raid Volume", 01:29:59.730 "block_size": 512, 01:29:59.730 "num_blocks": 126976, 01:29:59.731 "uuid": "7e3c7383-afe0-495d-bc8f-03451a28431e", 01:29:59.731 "assigned_rate_limits": { 01:29:59.731 "rw_ios_per_sec": 0, 01:29:59.731 "rw_mbytes_per_sec": 0, 01:29:59.731 "r_mbytes_per_sec": 0, 01:29:59.731 "w_mbytes_per_sec": 0 01:29:59.731 }, 
01:29:59.731 "claimed": false, 01:29:59.731 "zoned": false, 01:29:59.731 "supported_io_types": { 01:29:59.731 "read": true, 01:29:59.731 "write": true, 01:29:59.731 "unmap": false, 01:29:59.731 "flush": false, 01:29:59.731 "reset": true, 01:29:59.731 "nvme_admin": false, 01:29:59.731 "nvme_io": false, 01:29:59.731 "nvme_io_md": false, 01:29:59.731 "write_zeroes": true, 01:29:59.731 "zcopy": false, 01:29:59.731 "get_zone_info": false, 01:29:59.731 "zone_management": false, 01:29:59.731 "zone_append": false, 01:29:59.731 "compare": false, 01:29:59.731 "compare_and_write": false, 01:29:59.731 "abort": false, 01:29:59.731 "seek_hole": false, 01:29:59.731 "seek_data": false, 01:29:59.731 "copy": false, 01:29:59.731 "nvme_iov_md": false 01:29:59.731 }, 01:29:59.731 "driver_specific": { 01:29:59.731 "raid": { 01:29:59.731 "uuid": "7e3c7383-afe0-495d-bc8f-03451a28431e", 01:29:59.731 "strip_size_kb": 64, 01:29:59.731 "state": "online", 01:29:59.731 "raid_level": "raid5f", 01:29:59.731 "superblock": true, 01:29:59.731 "num_base_bdevs": 3, 01:29:59.731 "num_base_bdevs_discovered": 3, 01:29:59.731 "num_base_bdevs_operational": 3, 01:29:59.731 "base_bdevs_list": [ 01:29:59.731 { 01:29:59.731 "name": "pt1", 01:29:59.731 "uuid": "00000000-0000-0000-0000-000000000001", 01:29:59.731 "is_configured": true, 01:29:59.731 "data_offset": 2048, 01:29:59.731 "data_size": 63488 01:29:59.731 }, 01:29:59.731 { 01:29:59.731 "name": "pt2", 01:29:59.731 "uuid": "00000000-0000-0000-0000-000000000002", 01:29:59.731 "is_configured": true, 01:29:59.731 "data_offset": 2048, 01:29:59.731 "data_size": 63488 01:29:59.731 }, 01:29:59.731 { 01:29:59.731 "name": "pt3", 01:29:59.731 "uuid": "00000000-0000-0000-0000-000000000003", 01:29:59.731 "is_configured": true, 01:29:59.731 "data_offset": 2048, 01:29:59.731 "data_size": 63488 01:29:59.731 } 01:29:59.731 ] 01:29:59.731 } 01:29:59.731 } 01:29:59.731 }' 01:29:59.731 05:24:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r 
'.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 01:29:59.731 05:24:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 01:29:59.731 pt2 01:29:59.731 pt3' 01:29:59.731 05:24:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:29:59.731 05:24:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 01:29:59.731 05:24:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:29:59.731 05:24:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 01:29:59.731 05:24:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:29:59.731 05:24:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:59.731 05:24:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:29:59.731 05:24:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:59.731 05:24:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:29:59.731 05:24:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:29:59.731 05:24:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:29:59.731 05:24:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 01:29:59.731 05:24:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:59.731 05:24:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:29:59.731 05:24:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" 
")' 01:29:59.731 05:24:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:59.731 05:24:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:29:59.731 05:24:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:29:59.731 05:24:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:29:59.731 05:24:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 01:29:59.731 05:24:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:59.731 05:24:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:29:59.731 05:24:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:29:59.989 05:24:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:59.989 05:24:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:29:59.989 05:24:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:29:59.989 05:24:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 01:29:59.989 05:24:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:59.989 05:24:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:29:59.989 05:24:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 01:29:59.989 [2024-12-09 05:24:51.396675] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 01:29:59.989 05:24:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:59.989 05:24:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 
7e3c7383-afe0-495d-bc8f-03451a28431e '!=' 7e3c7383-afe0-495d-bc8f-03451a28431e ']' 01:29:59.989 05:24:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 01:29:59.989 05:24:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 01:29:59.989 05:24:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 01:29:59.989 05:24:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 01:29:59.989 05:24:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:59.989 05:24:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:29:59.989 [2024-12-09 05:24:51.448509] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 01:29:59.989 05:24:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:59.989 05:24:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 01:29:59.989 05:24:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:29:59.989 05:24:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:29:59.989 05:24:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 01:29:59.989 05:24:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:29:59.989 05:24:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 01:29:59.989 05:24:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:29:59.989 05:24:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:29:59.989 05:24:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:29:59.989 05:24:51 bdev_raid.raid5f_superblock_test 
-- bdev/bdev_raid.sh@111 -- # local tmp 01:29:59.989 05:24:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:29:59.989 05:24:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:59.989 05:24:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:29:59.989 05:24:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:29:59.989 05:24:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:59.989 05:24:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:29:59.989 "name": "raid_bdev1", 01:29:59.990 "uuid": "7e3c7383-afe0-495d-bc8f-03451a28431e", 01:29:59.990 "strip_size_kb": 64, 01:29:59.990 "state": "online", 01:29:59.990 "raid_level": "raid5f", 01:29:59.990 "superblock": true, 01:29:59.990 "num_base_bdevs": 3, 01:29:59.990 "num_base_bdevs_discovered": 2, 01:29:59.990 "num_base_bdevs_operational": 2, 01:29:59.990 "base_bdevs_list": [ 01:29:59.990 { 01:29:59.990 "name": null, 01:29:59.990 "uuid": "00000000-0000-0000-0000-000000000000", 01:29:59.990 "is_configured": false, 01:29:59.990 "data_offset": 0, 01:29:59.990 "data_size": 63488 01:29:59.990 }, 01:29:59.990 { 01:29:59.990 "name": "pt2", 01:29:59.990 "uuid": "00000000-0000-0000-0000-000000000002", 01:29:59.990 "is_configured": true, 01:29:59.990 "data_offset": 2048, 01:29:59.990 "data_size": 63488 01:29:59.990 }, 01:29:59.990 { 01:29:59.990 "name": "pt3", 01:29:59.990 "uuid": "00000000-0000-0000-0000-000000000003", 01:29:59.990 "is_configured": true, 01:29:59.990 "data_offset": 2048, 01:29:59.990 "data_size": 63488 01:29:59.990 } 01:29:59.990 ] 01:29:59.990 }' 01:29:59.990 05:24:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:29:59.990 05:24:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:30:00.558 
05:24:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 01:30:00.559 05:24:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:00.559 05:24:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:30:00.559 [2024-12-09 05:24:51.968622] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 01:30:00.559 [2024-12-09 05:24:51.968659] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 01:30:00.559 [2024-12-09 05:24:51.968788] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 01:30:00.559 [2024-12-09 05:24:51.968858] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 01:30:00.559 [2024-12-09 05:24:51.968878] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 01:30:00.559 05:24:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:00.559 05:24:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 01:30:00.559 05:24:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:00.559 05:24:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:30:00.559 05:24:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 01:30:00.559 05:24:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:00.559 05:24:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 01:30:00.559 05:24:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 01:30:00.559 05:24:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 01:30:00.559 05:24:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # 
(( i < num_base_bdevs )) 01:30:00.559 05:24:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 01:30:00.559 05:24:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:00.559 05:24:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:30:00.559 05:24:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:00.559 05:24:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 01:30:00.559 05:24:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 01:30:00.559 05:24:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 01:30:00.559 05:24:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:00.559 05:24:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:30:00.559 05:24:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:00.559 05:24:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 01:30:00.559 05:24:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 01:30:00.559 05:24:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 01:30:00.559 05:24:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 01:30:00.559 05:24:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 01:30:00.559 05:24:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:00.559 05:24:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:30:00.559 [2024-12-09 05:24:52.052626] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on 
malloc2 01:30:00.559 [2024-12-09 05:24:52.052703] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:30:00.559 [2024-12-09 05:24:52.052743] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 01:30:00.559 [2024-12-09 05:24:52.052759] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:30:00.559 [2024-12-09 05:24:52.055786] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:30:00.559 [2024-12-09 05:24:52.055837] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 01:30:00.559 [2024-12-09 05:24:52.055949] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 01:30:00.559 [2024-12-09 05:24:52.056040] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 01:30:00.559 pt2 01:30:00.559 05:24:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:00.559 05:24:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 01:30:00.559 05:24:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:30:00.559 05:24:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:30:00.559 05:24:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 01:30:00.559 05:24:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:30:00.559 05:24:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 01:30:00.559 05:24:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:30:00.559 05:24:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:30:00.559 05:24:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 01:30:00.559 05:24:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:30:00.559 05:24:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:30:00.559 05:24:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:30:00.559 05:24:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:00.559 05:24:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:30:00.559 05:24:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:00.559 05:24:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:30:00.559 "name": "raid_bdev1", 01:30:00.559 "uuid": "7e3c7383-afe0-495d-bc8f-03451a28431e", 01:30:00.559 "strip_size_kb": 64, 01:30:00.559 "state": "configuring", 01:30:00.559 "raid_level": "raid5f", 01:30:00.559 "superblock": true, 01:30:00.559 "num_base_bdevs": 3, 01:30:00.559 "num_base_bdevs_discovered": 1, 01:30:00.559 "num_base_bdevs_operational": 2, 01:30:00.559 "base_bdevs_list": [ 01:30:00.559 { 01:30:00.559 "name": null, 01:30:00.559 "uuid": "00000000-0000-0000-0000-000000000000", 01:30:00.559 "is_configured": false, 01:30:00.559 "data_offset": 2048, 01:30:00.559 "data_size": 63488 01:30:00.559 }, 01:30:00.559 { 01:30:00.559 "name": "pt2", 01:30:00.559 "uuid": "00000000-0000-0000-0000-000000000002", 01:30:00.559 "is_configured": true, 01:30:00.559 "data_offset": 2048, 01:30:00.559 "data_size": 63488 01:30:00.559 }, 01:30:00.559 { 01:30:00.559 "name": null, 01:30:00.559 "uuid": "00000000-0000-0000-0000-000000000003", 01:30:00.559 "is_configured": false, 01:30:00.559 "data_offset": 2048, 01:30:00.559 "data_size": 63488 01:30:00.559 } 01:30:00.559 ] 01:30:00.559 }' 01:30:00.559 05:24:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:30:00.559 05:24:52 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:30:01.126 05:24:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 01:30:01.126 05:24:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 01:30:01.126 05:24:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 01:30:01.126 05:24:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 01:30:01.126 05:24:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:01.126 05:24:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:30:01.126 [2024-12-09 05:24:52.572820] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 01:30:01.126 [2024-12-09 05:24:52.572921] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:30:01.126 [2024-12-09 05:24:52.572953] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 01:30:01.126 [2024-12-09 05:24:52.572972] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:30:01.126 [2024-12-09 05:24:52.573647] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:30:01.126 [2024-12-09 05:24:52.573685] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 01:30:01.126 [2024-12-09 05:24:52.573809] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 01:30:01.126 [2024-12-09 05:24:52.573851] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 01:30:01.126 [2024-12-09 05:24:52.574014] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 01:30:01.126 [2024-12-09 05:24:52.574035] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 01:30:01.126 [2024-12-09 
05:24:52.574351] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 01:30:01.126 [2024-12-09 05:24:52.579582] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 01:30:01.127 [2024-12-09 05:24:52.579742] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 01:30:01.127 [2024-12-09 05:24:52.580106] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:30:01.127 pt3 01:30:01.127 05:24:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:01.127 05:24:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 01:30:01.127 05:24:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:30:01.127 05:24:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:30:01.127 05:24:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 01:30:01.127 05:24:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:30:01.127 05:24:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 01:30:01.127 05:24:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:30:01.127 05:24:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:30:01.127 05:24:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:30:01.127 05:24:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:30:01.127 05:24:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:30:01.127 05:24:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:01.127 05:24:52 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:30:01.127 05:24:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:30:01.127 05:24:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:01.127 05:24:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:30:01.127 "name": "raid_bdev1", 01:30:01.127 "uuid": "7e3c7383-afe0-495d-bc8f-03451a28431e", 01:30:01.127 "strip_size_kb": 64, 01:30:01.127 "state": "online", 01:30:01.127 "raid_level": "raid5f", 01:30:01.127 "superblock": true, 01:30:01.127 "num_base_bdevs": 3, 01:30:01.127 "num_base_bdevs_discovered": 2, 01:30:01.127 "num_base_bdevs_operational": 2, 01:30:01.127 "base_bdevs_list": [ 01:30:01.127 { 01:30:01.127 "name": null, 01:30:01.127 "uuid": "00000000-0000-0000-0000-000000000000", 01:30:01.127 "is_configured": false, 01:30:01.127 "data_offset": 2048, 01:30:01.127 "data_size": 63488 01:30:01.127 }, 01:30:01.127 { 01:30:01.127 "name": "pt2", 01:30:01.127 "uuid": "00000000-0000-0000-0000-000000000002", 01:30:01.127 "is_configured": true, 01:30:01.127 "data_offset": 2048, 01:30:01.127 "data_size": 63488 01:30:01.127 }, 01:30:01.127 { 01:30:01.127 "name": "pt3", 01:30:01.127 "uuid": "00000000-0000-0000-0000-000000000003", 01:30:01.127 "is_configured": true, 01:30:01.127 "data_offset": 2048, 01:30:01.127 "data_size": 63488 01:30:01.127 } 01:30:01.127 ] 01:30:01.127 }' 01:30:01.127 05:24:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:30:01.127 05:24:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:30:01.693 05:24:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 01:30:01.693 05:24:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:01.693 05:24:53 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 01:30:01.693 [2024-12-09 05:24:53.094294] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 01:30:01.693 [2024-12-09 05:24:53.094331] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 01:30:01.693 [2024-12-09 05:24:53.094453] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 01:30:01.693 [2024-12-09 05:24:53.094568] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 01:30:01.693 [2024-12-09 05:24:53.094585] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 01:30:01.693 05:24:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:01.693 05:24:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 01:30:01.693 05:24:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:01.694 05:24:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:30:01.694 05:24:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 01:30:01.694 05:24:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:01.694 05:24:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 01:30:01.694 05:24:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 01:30:01.694 05:24:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 01:30:01.694 05:24:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 01:30:01.694 05:24:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 01:30:01.694 05:24:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:01.694 05:24:53 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:30:01.694 05:24:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:01.694 05:24:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 01:30:01.694 05:24:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:01.694 05:24:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:30:01.694 [2024-12-09 05:24:53.166283] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 01:30:01.694 [2024-12-09 05:24:53.166352] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:30:01.694 [2024-12-09 05:24:53.166434] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 01:30:01.694 [2024-12-09 05:24:53.166450] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:30:01.694 [2024-12-09 05:24:53.169404] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:30:01.694 [2024-12-09 05:24:53.169454] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 01:30:01.694 [2024-12-09 05:24:53.169551] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 01:30:01.694 [2024-12-09 05:24:53.169610] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 01:30:01.694 [2024-12-09 05:24:53.169832] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 01:30:01.694 [2024-12-09 05:24:53.169851] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 01:30:01.694 [2024-12-09 05:24:53.169874] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 01:30:01.694 
[2024-12-09 05:24:53.169939] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 01:30:01.694 pt1 01:30:01.694 05:24:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:01.694 05:24:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 01:30:01.694 05:24:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 01:30:01.694 05:24:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:30:01.694 05:24:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:30:01.694 05:24:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 01:30:01.694 05:24:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:30:01.694 05:24:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 01:30:01.694 05:24:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:30:01.694 05:24:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:30:01.694 05:24:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:30:01.694 05:24:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:30:01.694 05:24:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:30:01.694 05:24:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:01.694 05:24:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:30:01.694 05:24:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:30:01.694 05:24:53 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:01.694 05:24:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:30:01.694 "name": "raid_bdev1", 01:30:01.694 "uuid": "7e3c7383-afe0-495d-bc8f-03451a28431e", 01:30:01.694 "strip_size_kb": 64, 01:30:01.694 "state": "configuring", 01:30:01.694 "raid_level": "raid5f", 01:30:01.694 "superblock": true, 01:30:01.694 "num_base_bdevs": 3, 01:30:01.694 "num_base_bdevs_discovered": 1, 01:30:01.694 "num_base_bdevs_operational": 2, 01:30:01.694 "base_bdevs_list": [ 01:30:01.694 { 01:30:01.694 "name": null, 01:30:01.694 "uuid": "00000000-0000-0000-0000-000000000000", 01:30:01.694 "is_configured": false, 01:30:01.694 "data_offset": 2048, 01:30:01.694 "data_size": 63488 01:30:01.694 }, 01:30:01.694 { 01:30:01.694 "name": "pt2", 01:30:01.694 "uuid": "00000000-0000-0000-0000-000000000002", 01:30:01.694 "is_configured": true, 01:30:01.694 "data_offset": 2048, 01:30:01.694 "data_size": 63488 01:30:01.694 }, 01:30:01.694 { 01:30:01.694 "name": null, 01:30:01.694 "uuid": "00000000-0000-0000-0000-000000000003", 01:30:01.694 "is_configured": false, 01:30:01.694 "data_offset": 2048, 01:30:01.694 "data_size": 63488 01:30:01.694 } 01:30:01.694 ] 01:30:01.694 }' 01:30:01.694 05:24:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:30:01.694 05:24:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:30:02.261 05:24:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 01:30:02.261 05:24:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 01:30:02.261 05:24:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:02.261 05:24:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:30:02.261 05:24:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 01:30:02.261 05:24:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 01:30:02.261 05:24:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 01:30:02.261 05:24:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:02.261 05:24:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:30:02.261 [2024-12-09 05:24:53.746525] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 01:30:02.261 [2024-12-09 05:24:53.746712] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:30:02.261 [2024-12-09 05:24:53.746770] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 01:30:02.261 [2024-12-09 05:24:53.746801] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:30:02.261 [2024-12-09 05:24:53.747420] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:30:02.261 [2024-12-09 05:24:53.747464] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 01:30:02.261 [2024-12-09 05:24:53.747585] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 01:30:02.261 [2024-12-09 05:24:53.747617] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 01:30:02.261 [2024-12-09 05:24:53.747816] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 01:30:02.261 [2024-12-09 05:24:53.747838] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 01:30:02.261 [2024-12-09 05:24:53.748164] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 01:30:02.261 [2024-12-09 05:24:53.753119] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 01:30:02.261 [2024-12-09 
05:24:53.753151] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 01:30:02.261 [2024-12-09 05:24:53.753508] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:30:02.261 pt3 01:30:02.261 05:24:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:02.261 05:24:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 01:30:02.261 05:24:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:30:02.261 05:24:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:30:02.261 05:24:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 01:30:02.261 05:24:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:30:02.261 05:24:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 01:30:02.261 05:24:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:30:02.261 05:24:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:30:02.261 05:24:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:30:02.261 05:24:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:30:02.261 05:24:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:30:02.261 05:24:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:30:02.261 05:24:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:02.261 05:24:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:30:02.261 05:24:53 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:02.261 05:24:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:30:02.261 "name": "raid_bdev1", 01:30:02.261 "uuid": "7e3c7383-afe0-495d-bc8f-03451a28431e", 01:30:02.261 "strip_size_kb": 64, 01:30:02.261 "state": "online", 01:30:02.261 "raid_level": "raid5f", 01:30:02.261 "superblock": true, 01:30:02.261 "num_base_bdevs": 3, 01:30:02.261 "num_base_bdevs_discovered": 2, 01:30:02.261 "num_base_bdevs_operational": 2, 01:30:02.261 "base_bdevs_list": [ 01:30:02.261 { 01:30:02.261 "name": null, 01:30:02.261 "uuid": "00000000-0000-0000-0000-000000000000", 01:30:02.261 "is_configured": false, 01:30:02.261 "data_offset": 2048, 01:30:02.261 "data_size": 63488 01:30:02.261 }, 01:30:02.261 { 01:30:02.261 "name": "pt2", 01:30:02.261 "uuid": "00000000-0000-0000-0000-000000000002", 01:30:02.261 "is_configured": true, 01:30:02.261 "data_offset": 2048, 01:30:02.261 "data_size": 63488 01:30:02.261 }, 01:30:02.261 { 01:30:02.261 "name": "pt3", 01:30:02.261 "uuid": "00000000-0000-0000-0000-000000000003", 01:30:02.261 "is_configured": true, 01:30:02.261 "data_offset": 2048, 01:30:02.261 "data_size": 63488 01:30:02.261 } 01:30:02.261 ] 01:30:02.261 }' 01:30:02.261 05:24:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:30:02.261 05:24:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:30:02.830 05:24:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 01:30:02.830 05:24:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:02.830 05:24:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:30:02.830 05:24:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 01:30:02.830 05:24:54 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:02.830 05:24:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 01:30:02.830 05:24:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 01:30:02.830 05:24:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 01:30:02.830 05:24:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:02.830 05:24:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:30:02.830 [2024-12-09 05:24:54.319631] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 01:30:02.830 05:24:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:02.830 05:24:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 7e3c7383-afe0-495d-bc8f-03451a28431e '!=' 7e3c7383-afe0-495d-bc8f-03451a28431e ']' 01:30:02.830 05:24:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 81422 01:30:02.830 05:24:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 81422 ']' 01:30:02.830 05:24:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 81422 01:30:02.830 05:24:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 01:30:02.830 05:24:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:30:02.830 05:24:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81422 01:30:02.830 killing process with pid 81422 01:30:02.831 05:24:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:30:02.831 05:24:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:30:02.831 05:24:54 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 81422' 01:30:02.831 05:24:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 81422 01:30:02.831 [2024-12-09 05:24:54.398216] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 01:30:02.831 [2024-12-09 05:24:54.398306] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 01:30:02.831 05:24:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 81422 01:30:02.831 [2024-12-09 05:24:54.398417] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 01:30:02.831 [2024-12-09 05:24:54.398439] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 01:30:03.089 [2024-12-09 05:24:54.653296] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 01:30:04.479 05:24:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 01:30:04.479 01:30:04.479 real 0m8.575s 01:30:04.479 user 0m13.984s 01:30:04.479 sys 0m1.233s 01:30:04.479 ************************************ 01:30:04.479 END TEST raid5f_superblock_test 01:30:04.479 ************************************ 01:30:04.479 05:24:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 01:30:04.479 05:24:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:30:04.479 05:24:55 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 01:30:04.479 05:24:55 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false true 01:30:04.479 05:24:55 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 01:30:04.479 05:24:55 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 01:30:04.479 05:24:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 01:30:04.479 ************************************ 01:30:04.479 START TEST 
raid5f_rebuild_test 01:30:04.479 ************************************ 01:30:04.479 05:24:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 false false true 01:30:04.479 05:24:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 01:30:04.479 05:24:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 01:30:04.479 05:24:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 01:30:04.479 05:24:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 01:30:04.479 05:24:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 01:30:04.479 05:24:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 01:30:04.479 05:24:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 01:30:04.479 05:24:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 01:30:04.479 05:24:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 01:30:04.479 05:24:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 01:30:04.479 05:24:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 01:30:04.479 05:24:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 01:30:04.479 05:24:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 01:30:04.479 05:24:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 01:30:04.479 05:24:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 01:30:04.479 05:24:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 01:30:04.479 05:24:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 01:30:04.479 05:24:55 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 01:30:04.479 05:24:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 01:30:04.479 05:24:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 01:30:04.479 05:24:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 01:30:04.479 05:24:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 01:30:04.479 05:24:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 01:30:04.479 05:24:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 01:30:04.479 05:24:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 01:30:04.479 05:24:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 01:30:04.479 05:24:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 01:30:04.479 05:24:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 01:30:04.479 05:24:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=81873 01:30:04.479 05:24:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 81873 01:30:04.479 05:24:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 01:30:04.479 05:24:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 81873 ']' 01:30:04.479 05:24:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:30:04.479 05:24:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 01:30:04.479 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
01:30:04.479 05:24:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:30:04.479 05:24:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 01:30:04.479 05:24:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 01:30:04.479 I/O size of 3145728 is greater than zero copy threshold (65536). 01:30:04.479 Zero copy mechanism will not be used. 01:30:04.479 [2024-12-09 05:24:55.958612] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:30:04.479 [2024-12-09 05:24:55.958789] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81873 ] 01:30:04.738 [2024-12-09 05:24:56.141561] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:30:04.738 [2024-12-09 05:24:56.262914] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:30:04.996 [2024-12-09 05:24:56.447979] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 01:30:04.996 [2024-12-09 05:24:56.448044] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 01:30:05.562 05:24:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:30:05.562 05:24:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 01:30:05.562 05:24:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 01:30:05.562 05:24:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 01:30:05.562 05:24:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:05.562 05:24:56 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 01:30:05.562 BaseBdev1_malloc 01:30:05.562 05:24:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:05.562 05:24:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 01:30:05.562 05:24:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:05.562 05:24:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 01:30:05.562 [2024-12-09 05:24:56.995132] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 01:30:05.562 [2024-12-09 05:24:56.995218] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:30:05.562 [2024-12-09 05:24:56.995249] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 01:30:05.562 [2024-12-09 05:24:56.995283] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:30:05.562 [2024-12-09 05:24:56.998283] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:30:05.562 [2024-12-09 05:24:56.998341] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 01:30:05.562 BaseBdev1 01:30:05.562 05:24:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:05.562 05:24:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 01:30:05.562 05:24:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 01:30:05.562 05:24:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:05.562 05:24:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 01:30:05.562 BaseBdev2_malloc 01:30:05.562 05:24:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:05.562 05:24:57 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 01:30:05.562 05:24:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:05.562 05:24:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 01:30:05.562 [2024-12-09 05:24:57.041388] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 01:30:05.562 [2024-12-09 05:24:57.041629] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:30:05.562 [2024-12-09 05:24:57.041674] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 01:30:05.562 [2024-12-09 05:24:57.041693] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:30:05.562 [2024-12-09 05:24:57.044523] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:30:05.562 [2024-12-09 05:24:57.044571] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 01:30:05.562 BaseBdev2 01:30:05.562 05:24:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:05.562 05:24:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 01:30:05.562 05:24:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 01:30:05.562 05:24:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:05.562 05:24:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 01:30:05.562 BaseBdev3_malloc 01:30:05.562 05:24:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:05.562 05:24:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 01:30:05.562 05:24:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 
-- # xtrace_disable 01:30:05.562 05:24:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 01:30:05.562 [2024-12-09 05:24:57.097606] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 01:30:05.562 [2024-12-09 05:24:57.097883] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:30:05.562 [2024-12-09 05:24:57.097924] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 01:30:05.562 [2024-12-09 05:24:57.097946] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:30:05.562 [2024-12-09 05:24:57.100669] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:30:05.562 [2024-12-09 05:24:57.100717] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 01:30:05.562 BaseBdev3 01:30:05.562 05:24:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:05.562 05:24:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 01:30:05.562 05:24:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:05.562 05:24:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 01:30:05.562 spare_malloc 01:30:05.562 05:24:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:05.562 05:24:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 01:30:05.562 05:24:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:05.562 05:24:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 01:30:05.562 spare_delay 01:30:05.562 05:24:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:05.562 05:24:57 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 01:30:05.562 05:24:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:05.562 05:24:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 01:30:05.562 [2024-12-09 05:24:57.152559] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 01:30:05.562 [2024-12-09 05:24:57.152624] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:30:05.562 [2024-12-09 05:24:57.152651] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 01:30:05.562 [2024-12-09 05:24:57.152667] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:30:05.562 [2024-12-09 05:24:57.155381] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:30:05.562 [2024-12-09 05:24:57.155452] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 01:30:05.562 spare 01:30:05.562 05:24:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:05.562 05:24:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 01:30:05.562 05:24:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:05.562 05:24:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 01:30:05.562 [2024-12-09 05:24:57.160651] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 01:30:05.563 [2024-12-09 05:24:57.163036] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 01:30:05.563 [2024-12-09 05:24:57.163120] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 01:30:05.563 [2024-12-09 05:24:57.163226] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000007780 01:30:05.563 [2024-12-09 05:24:57.163242] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 01:30:05.563 [2024-12-09 05:24:57.163574] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 01:30:05.563 [2024-12-09 05:24:57.168580] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 01:30:05.563 [2024-12-09 05:24:57.168609] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 01:30:05.563 [2024-12-09 05:24:57.168847] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:30:05.563 05:24:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:05.563 05:24:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 01:30:05.563 05:24:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:30:05.563 05:24:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:30:05.563 05:24:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 01:30:05.563 05:24:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:30:05.563 05:24:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 01:30:05.563 05:24:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:30:05.563 05:24:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:30:05.563 05:24:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:30:05.563 05:24:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:30:05.563 05:24:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:30:05.563 
05:24:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:05.563 05:24:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 01:30:05.563 05:24:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:30:05.821 05:24:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:05.821 05:24:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:30:05.821 "name": "raid_bdev1", 01:30:05.821 "uuid": "8b7bb20d-1db5-4095-a948-c06f29dc1557", 01:30:05.821 "strip_size_kb": 64, 01:30:05.821 "state": "online", 01:30:05.821 "raid_level": "raid5f", 01:30:05.821 "superblock": false, 01:30:05.821 "num_base_bdevs": 3, 01:30:05.821 "num_base_bdevs_discovered": 3, 01:30:05.821 "num_base_bdevs_operational": 3, 01:30:05.821 "base_bdevs_list": [ 01:30:05.821 { 01:30:05.821 "name": "BaseBdev1", 01:30:05.821 "uuid": "e230069b-398a-551e-9c88-e60ff4bacc30", 01:30:05.821 "is_configured": true, 01:30:05.821 "data_offset": 0, 01:30:05.821 "data_size": 65536 01:30:05.821 }, 01:30:05.821 { 01:30:05.821 "name": "BaseBdev2", 01:30:05.821 "uuid": "df9ad6d9-2d18-5bea-9662-93598f7996dc", 01:30:05.821 "is_configured": true, 01:30:05.821 "data_offset": 0, 01:30:05.821 "data_size": 65536 01:30:05.821 }, 01:30:05.821 { 01:30:05.821 "name": "BaseBdev3", 01:30:05.821 "uuid": "9099804d-0d01-5042-82e7-dfdb5c81c43d", 01:30:05.821 "is_configured": true, 01:30:05.821 "data_offset": 0, 01:30:05.821 "data_size": 65536 01:30:05.821 } 01:30:05.821 ] 01:30:05.821 }' 01:30:05.821 05:24:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:30:05.821 05:24:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 01:30:06.079 05:24:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 01:30:06.079 05:24:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # 
rpc_cmd bdev_get_bdevs -b raid_bdev1 01:30:06.079 05:24:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:06.079 05:24:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 01:30:06.079 [2024-12-09 05:24:57.667803] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 01:30:06.079 05:24:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:06.337 05:24:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=131072 01:30:06.337 05:24:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 01:30:06.337 05:24:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:06.337 05:24:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 01:30:06.337 05:24:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 01:30:06.337 05:24:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:06.337 05:24:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 01:30:06.337 05:24:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 01:30:06.337 05:24:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 01:30:06.337 05:24:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 01:30:06.337 05:24:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 01:30:06.337 05:24:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 01:30:06.337 05:24:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 01:30:06.337 05:24:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 01:30:06.337 05:24:57 
bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 01:30:06.338 05:24:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 01:30:06.338 05:24:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 01:30:06.338 05:24:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 01:30:06.338 05:24:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 01:30:06.338 05:24:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 01:30:06.596 [2024-12-09 05:24:57.979682] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 01:30:06.596 /dev/nbd0 01:30:06.596 05:24:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 01:30:06.596 05:24:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 01:30:06.596 05:24:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 01:30:06.596 05:24:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 01:30:06.596 05:24:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 01:30:06.596 05:24:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 01:30:06.596 05:24:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 01:30:06.596 05:24:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 01:30:06.596 05:24:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 01:30:06.596 05:24:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 01:30:06.596 05:24:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 
iflag=direct 01:30:06.596 1+0 records in 01:30:06.596 1+0 records out 01:30:06.596 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000271407 s, 15.1 MB/s 01:30:06.596 05:24:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:30:06.596 05:24:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 01:30:06.596 05:24:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:30:06.596 05:24:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 01:30:06.596 05:24:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 01:30:06.596 05:24:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 01:30:06.596 05:24:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 01:30:06.596 05:24:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 01:30:06.596 05:24:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 01:30:06.596 05:24:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 128 01:30:06.596 05:24:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 01:30:07.181 512+0 records in 01:30:07.181 512+0 records out 01:30:07.181 67108864 bytes (67 MB, 64 MiB) copied, 0.532847 s, 126 MB/s 01:30:07.181 05:24:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 01:30:07.181 05:24:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 01:30:07.182 05:24:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 01:30:07.182 05:24:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 01:30:07.182 05:24:58 
bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 01:30:07.182 05:24:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 01:30:07.182 05:24:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 01:30:07.441 05:24:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 01:30:07.441 05:24:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 01:30:07.441 05:24:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 01:30:07.441 05:24:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:30:07.441 05:24:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:30:07.441 05:24:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 01:30:07.441 [2024-12-09 05:24:58.883087] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:30:07.441 05:24:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 01:30:07.441 05:24:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 01:30:07.441 05:24:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 01:30:07.441 05:24:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:07.441 05:24:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 01:30:07.441 [2024-12-09 05:24:58.894475] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 01:30:07.441 05:24:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:07.441 05:24:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 01:30:07.441 05:24:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 01:30:07.441 05:24:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:30:07.441 05:24:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 01:30:07.441 05:24:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:30:07.441 05:24:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 01:30:07.441 05:24:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:30:07.441 05:24:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:30:07.441 05:24:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:30:07.441 05:24:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:30:07.441 05:24:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:30:07.441 05:24:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:07.441 05:24:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 01:30:07.441 05:24:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:30:07.441 05:24:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:07.441 05:24:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:30:07.441 "name": "raid_bdev1", 01:30:07.441 "uuid": "8b7bb20d-1db5-4095-a948-c06f29dc1557", 01:30:07.441 "strip_size_kb": 64, 01:30:07.441 "state": "online", 01:30:07.441 "raid_level": "raid5f", 01:30:07.441 "superblock": false, 01:30:07.441 "num_base_bdevs": 3, 01:30:07.441 "num_base_bdevs_discovered": 2, 01:30:07.441 "num_base_bdevs_operational": 2, 01:30:07.441 "base_bdevs_list": [ 01:30:07.441 { 01:30:07.441 "name": null, 01:30:07.441 "uuid": 
"00000000-0000-0000-0000-000000000000", 01:30:07.441 "is_configured": false, 01:30:07.441 "data_offset": 0, 01:30:07.441 "data_size": 65536 01:30:07.441 }, 01:30:07.441 { 01:30:07.441 "name": "BaseBdev2", 01:30:07.441 "uuid": "df9ad6d9-2d18-5bea-9662-93598f7996dc", 01:30:07.441 "is_configured": true, 01:30:07.441 "data_offset": 0, 01:30:07.441 "data_size": 65536 01:30:07.441 }, 01:30:07.441 { 01:30:07.442 "name": "BaseBdev3", 01:30:07.442 "uuid": "9099804d-0d01-5042-82e7-dfdb5c81c43d", 01:30:07.442 "is_configured": true, 01:30:07.442 "data_offset": 0, 01:30:07.442 "data_size": 65536 01:30:07.442 } 01:30:07.442 ] 01:30:07.442 }' 01:30:07.442 05:24:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:30:07.442 05:24:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 01:30:08.008 05:24:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 01:30:08.008 05:24:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:08.008 05:24:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 01:30:08.008 [2024-12-09 05:24:59.394651] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 01:30:08.008 [2024-12-09 05:24:59.409644] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b680 01:30:08.008 05:24:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:08.008 05:24:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 01:30:08.008 [2024-12-09 05:24:59.417013] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 01:30:08.942 05:25:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 01:30:08.942 05:25:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:30:08.942 
05:25:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 01:30:08.942 05:25:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 01:30:08.942 05:25:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:30:08.942 05:25:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:30:08.942 05:25:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:30:08.942 05:25:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:08.942 05:25:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 01:30:08.942 05:25:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:08.942 05:25:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:30:08.942 "name": "raid_bdev1", 01:30:08.942 "uuid": "8b7bb20d-1db5-4095-a948-c06f29dc1557", 01:30:08.942 "strip_size_kb": 64, 01:30:08.942 "state": "online", 01:30:08.942 "raid_level": "raid5f", 01:30:08.942 "superblock": false, 01:30:08.942 "num_base_bdevs": 3, 01:30:08.942 "num_base_bdevs_discovered": 3, 01:30:08.942 "num_base_bdevs_operational": 3, 01:30:08.942 "process": { 01:30:08.942 "type": "rebuild", 01:30:08.942 "target": "spare", 01:30:08.942 "progress": { 01:30:08.942 "blocks": 18432, 01:30:08.942 "percent": 14 01:30:08.942 } 01:30:08.942 }, 01:30:08.942 "base_bdevs_list": [ 01:30:08.942 { 01:30:08.942 "name": "spare", 01:30:08.942 "uuid": "93f41c42-90ac-5375-a7d3-f7408faa04dc", 01:30:08.942 "is_configured": true, 01:30:08.942 "data_offset": 0, 01:30:08.942 "data_size": 65536 01:30:08.942 }, 01:30:08.942 { 01:30:08.942 "name": "BaseBdev2", 01:30:08.942 "uuid": "df9ad6d9-2d18-5bea-9662-93598f7996dc", 01:30:08.942 "is_configured": true, 01:30:08.942 "data_offset": 0, 01:30:08.942 "data_size": 65536 01:30:08.942 }, 01:30:08.942 
{ 01:30:08.942 "name": "BaseBdev3", 01:30:08.942 "uuid": "9099804d-0d01-5042-82e7-dfdb5c81c43d", 01:30:08.942 "is_configured": true, 01:30:08.942 "data_offset": 0, 01:30:08.942 "data_size": 65536 01:30:08.942 } 01:30:08.942 ] 01:30:08.942 }' 01:30:08.942 05:25:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:30:08.942 05:25:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 01:30:08.942 05:25:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:30:09.200 05:25:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 01:30:09.200 05:25:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 01:30:09.200 05:25:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:09.200 05:25:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 01:30:09.200 [2024-12-09 05:25:00.574495] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 01:30:09.200 [2024-12-09 05:25:00.629552] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 01:30:09.200 [2024-12-09 05:25:00.629640] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:30:09.200 [2024-12-09 05:25:00.629668] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 01:30:09.200 [2024-12-09 05:25:00.629679] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 01:30:09.200 05:25:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:09.200 05:25:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 01:30:09.200 05:25:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 
01:30:09.200 05:25:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:30:09.200 05:25:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 01:30:09.200 05:25:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:30:09.200 05:25:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 01:30:09.200 05:25:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:30:09.200 05:25:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:30:09.200 05:25:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:30:09.200 05:25:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:30:09.200 05:25:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:30:09.200 05:25:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:09.200 05:25:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 01:30:09.200 05:25:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:30:09.200 05:25:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:09.200 05:25:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:30:09.200 "name": "raid_bdev1", 01:30:09.200 "uuid": "8b7bb20d-1db5-4095-a948-c06f29dc1557", 01:30:09.200 "strip_size_kb": 64, 01:30:09.200 "state": "online", 01:30:09.200 "raid_level": "raid5f", 01:30:09.200 "superblock": false, 01:30:09.201 "num_base_bdevs": 3, 01:30:09.201 "num_base_bdevs_discovered": 2, 01:30:09.201 "num_base_bdevs_operational": 2, 01:30:09.201 "base_bdevs_list": [ 01:30:09.201 { 01:30:09.201 "name": null, 01:30:09.201 "uuid": "00000000-0000-0000-0000-000000000000", 01:30:09.201 
"is_configured": false, 01:30:09.201 "data_offset": 0, 01:30:09.201 "data_size": 65536 01:30:09.201 }, 01:30:09.201 { 01:30:09.201 "name": "BaseBdev2", 01:30:09.201 "uuid": "df9ad6d9-2d18-5bea-9662-93598f7996dc", 01:30:09.201 "is_configured": true, 01:30:09.201 "data_offset": 0, 01:30:09.201 "data_size": 65536 01:30:09.201 }, 01:30:09.201 { 01:30:09.201 "name": "BaseBdev3", 01:30:09.201 "uuid": "9099804d-0d01-5042-82e7-dfdb5c81c43d", 01:30:09.201 "is_configured": true, 01:30:09.201 "data_offset": 0, 01:30:09.201 "data_size": 65536 01:30:09.201 } 01:30:09.201 ] 01:30:09.201 }' 01:30:09.201 05:25:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:30:09.201 05:25:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 01:30:09.768 05:25:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 01:30:09.768 05:25:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:30:09.768 05:25:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 01:30:09.768 05:25:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 01:30:09.768 05:25:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:30:09.768 05:25:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:30:09.768 05:25:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:30:09.768 05:25:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:09.768 05:25:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 01:30:09.768 05:25:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:09.768 05:25:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:30:09.768 "name": 
"raid_bdev1", 01:30:09.768 "uuid": "8b7bb20d-1db5-4095-a948-c06f29dc1557", 01:30:09.768 "strip_size_kb": 64, 01:30:09.768 "state": "online", 01:30:09.768 "raid_level": "raid5f", 01:30:09.768 "superblock": false, 01:30:09.768 "num_base_bdevs": 3, 01:30:09.768 "num_base_bdevs_discovered": 2, 01:30:09.768 "num_base_bdevs_operational": 2, 01:30:09.768 "base_bdevs_list": [ 01:30:09.768 { 01:30:09.768 "name": null, 01:30:09.768 "uuid": "00000000-0000-0000-0000-000000000000", 01:30:09.768 "is_configured": false, 01:30:09.768 "data_offset": 0, 01:30:09.768 "data_size": 65536 01:30:09.768 }, 01:30:09.768 { 01:30:09.768 "name": "BaseBdev2", 01:30:09.768 "uuid": "df9ad6d9-2d18-5bea-9662-93598f7996dc", 01:30:09.768 "is_configured": true, 01:30:09.768 "data_offset": 0, 01:30:09.768 "data_size": 65536 01:30:09.768 }, 01:30:09.768 { 01:30:09.768 "name": "BaseBdev3", 01:30:09.768 "uuid": "9099804d-0d01-5042-82e7-dfdb5c81c43d", 01:30:09.768 "is_configured": true, 01:30:09.769 "data_offset": 0, 01:30:09.769 "data_size": 65536 01:30:09.769 } 01:30:09.769 ] 01:30:09.769 }' 01:30:09.769 05:25:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:30:09.769 05:25:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 01:30:09.769 05:25:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:30:09.769 05:25:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 01:30:09.769 05:25:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 01:30:09.769 05:25:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:09.769 05:25:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 01:30:09.769 [2024-12-09 05:25:01.359265] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 01:30:09.769 [2024-12-09 
05:25:01.374289] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 01:30:09.769 05:25:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:09.769 05:25:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 01:30:09.769 [2024-12-09 05:25:01.381727] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 01:30:11.143 05:25:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 01:30:11.143 05:25:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:30:11.143 05:25:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 01:30:11.143 05:25:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 01:30:11.143 05:25:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:30:11.143 05:25:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:30:11.143 05:25:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:30:11.143 05:25:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:11.143 05:25:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 01:30:11.143 05:25:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:11.143 05:25:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:30:11.143 "name": "raid_bdev1", 01:30:11.143 "uuid": "8b7bb20d-1db5-4095-a948-c06f29dc1557", 01:30:11.143 "strip_size_kb": 64, 01:30:11.143 "state": "online", 01:30:11.143 "raid_level": "raid5f", 01:30:11.143 "superblock": false, 01:30:11.143 "num_base_bdevs": 3, 01:30:11.143 "num_base_bdevs_discovered": 3, 01:30:11.144 "num_base_bdevs_operational": 3, 
01:30:11.144 "process": { 01:30:11.144 "type": "rebuild", 01:30:11.144 "target": "spare", 01:30:11.144 "progress": { 01:30:11.144 "blocks": 18432, 01:30:11.144 "percent": 14 01:30:11.144 } 01:30:11.144 }, 01:30:11.144 "base_bdevs_list": [ 01:30:11.144 { 01:30:11.144 "name": "spare", 01:30:11.144 "uuid": "93f41c42-90ac-5375-a7d3-f7408faa04dc", 01:30:11.144 "is_configured": true, 01:30:11.144 "data_offset": 0, 01:30:11.144 "data_size": 65536 01:30:11.144 }, 01:30:11.144 { 01:30:11.144 "name": "BaseBdev2", 01:30:11.144 "uuid": "df9ad6d9-2d18-5bea-9662-93598f7996dc", 01:30:11.144 "is_configured": true, 01:30:11.144 "data_offset": 0, 01:30:11.144 "data_size": 65536 01:30:11.144 }, 01:30:11.144 { 01:30:11.144 "name": "BaseBdev3", 01:30:11.144 "uuid": "9099804d-0d01-5042-82e7-dfdb5c81c43d", 01:30:11.144 "is_configured": true, 01:30:11.144 "data_offset": 0, 01:30:11.144 "data_size": 65536 01:30:11.144 } 01:30:11.144 ] 01:30:11.144 }' 01:30:11.144 05:25:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:30:11.144 05:25:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 01:30:11.144 05:25:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:30:11.144 05:25:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 01:30:11.144 05:25:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 01:30:11.144 05:25:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 01:30:11.144 05:25:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 01:30:11.144 05:25:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=604 01:30:11.144 05:25:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 01:30:11.144 05:25:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 
-- # verify_raid_bdev_process raid_bdev1 rebuild spare 01:30:11.144 05:25:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:30:11.144 05:25:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 01:30:11.144 05:25:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 01:30:11.144 05:25:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:30:11.144 05:25:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:30:11.144 05:25:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:30:11.144 05:25:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:11.144 05:25:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 01:30:11.144 05:25:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:11.144 05:25:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:30:11.144 "name": "raid_bdev1", 01:30:11.144 "uuid": "8b7bb20d-1db5-4095-a948-c06f29dc1557", 01:30:11.144 "strip_size_kb": 64, 01:30:11.144 "state": "online", 01:30:11.144 "raid_level": "raid5f", 01:30:11.144 "superblock": false, 01:30:11.144 "num_base_bdevs": 3, 01:30:11.144 "num_base_bdevs_discovered": 3, 01:30:11.144 "num_base_bdevs_operational": 3, 01:30:11.144 "process": { 01:30:11.144 "type": "rebuild", 01:30:11.144 "target": "spare", 01:30:11.144 "progress": { 01:30:11.144 "blocks": 22528, 01:30:11.144 "percent": 17 01:30:11.144 } 01:30:11.144 }, 01:30:11.144 "base_bdevs_list": [ 01:30:11.144 { 01:30:11.144 "name": "spare", 01:30:11.144 "uuid": "93f41c42-90ac-5375-a7d3-f7408faa04dc", 01:30:11.144 "is_configured": true, 01:30:11.144 "data_offset": 0, 01:30:11.144 "data_size": 65536 01:30:11.144 }, 01:30:11.144 { 01:30:11.144 "name": "BaseBdev2", 
01:30:11.144 "uuid": "df9ad6d9-2d18-5bea-9662-93598f7996dc", 01:30:11.144 "is_configured": true, 01:30:11.144 "data_offset": 0, 01:30:11.144 "data_size": 65536 01:30:11.144 }, 01:30:11.144 { 01:30:11.144 "name": "BaseBdev3", 01:30:11.144 "uuid": "9099804d-0d01-5042-82e7-dfdb5c81c43d", 01:30:11.144 "is_configured": true, 01:30:11.144 "data_offset": 0, 01:30:11.144 "data_size": 65536 01:30:11.144 } 01:30:11.144 ] 01:30:11.144 }' 01:30:11.144 05:25:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:30:11.144 05:25:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 01:30:11.144 05:25:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:30:11.144 05:25:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 01:30:11.144 05:25:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 01:30:12.518 05:25:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 01:30:12.518 05:25:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 01:30:12.518 05:25:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:30:12.518 05:25:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 01:30:12.518 05:25:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 01:30:12.518 05:25:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:30:12.518 05:25:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:30:12.518 05:25:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:30:12.518 05:25:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:12.518 
05:25:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 01:30:12.518 05:25:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:12.518 05:25:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:30:12.518 "name": "raid_bdev1", 01:30:12.518 "uuid": "8b7bb20d-1db5-4095-a948-c06f29dc1557", 01:30:12.518 "strip_size_kb": 64, 01:30:12.518 "state": "online", 01:30:12.518 "raid_level": "raid5f", 01:30:12.518 "superblock": false, 01:30:12.518 "num_base_bdevs": 3, 01:30:12.518 "num_base_bdevs_discovered": 3, 01:30:12.518 "num_base_bdevs_operational": 3, 01:30:12.518 "process": { 01:30:12.518 "type": "rebuild", 01:30:12.518 "target": "spare", 01:30:12.518 "progress": { 01:30:12.518 "blocks": 47104, 01:30:12.518 "percent": 35 01:30:12.518 } 01:30:12.518 }, 01:30:12.518 "base_bdevs_list": [ 01:30:12.518 { 01:30:12.518 "name": "spare", 01:30:12.518 "uuid": "93f41c42-90ac-5375-a7d3-f7408faa04dc", 01:30:12.518 "is_configured": true, 01:30:12.518 "data_offset": 0, 01:30:12.518 "data_size": 65536 01:30:12.518 }, 01:30:12.518 { 01:30:12.518 "name": "BaseBdev2", 01:30:12.518 "uuid": "df9ad6d9-2d18-5bea-9662-93598f7996dc", 01:30:12.518 "is_configured": true, 01:30:12.518 "data_offset": 0, 01:30:12.518 "data_size": 65536 01:30:12.518 }, 01:30:12.518 { 01:30:12.518 "name": "BaseBdev3", 01:30:12.518 "uuid": "9099804d-0d01-5042-82e7-dfdb5c81c43d", 01:30:12.518 "is_configured": true, 01:30:12.518 "data_offset": 0, 01:30:12.518 "data_size": 65536 01:30:12.518 } 01:30:12.518 ] 01:30:12.518 }' 01:30:12.518 05:25:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:30:12.518 05:25:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 01:30:12.518 05:25:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:30:12.518 05:25:03 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 01:30:12.518 05:25:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 01:30:13.451 05:25:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 01:30:13.451 05:25:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 01:30:13.451 05:25:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:30:13.451 05:25:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 01:30:13.451 05:25:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 01:30:13.451 05:25:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:30:13.451 05:25:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:30:13.451 05:25:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:13.451 05:25:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 01:30:13.451 05:25:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:30:13.451 05:25:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:13.451 05:25:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:30:13.451 "name": "raid_bdev1", 01:30:13.451 "uuid": "8b7bb20d-1db5-4095-a948-c06f29dc1557", 01:30:13.451 "strip_size_kb": 64, 01:30:13.451 "state": "online", 01:30:13.451 "raid_level": "raid5f", 01:30:13.451 "superblock": false, 01:30:13.451 "num_base_bdevs": 3, 01:30:13.451 "num_base_bdevs_discovered": 3, 01:30:13.451 "num_base_bdevs_operational": 3, 01:30:13.451 "process": { 01:30:13.451 "type": "rebuild", 01:30:13.451 "target": "spare", 01:30:13.451 "progress": { 01:30:13.451 "blocks": 69632, 01:30:13.451 "percent": 53 01:30:13.451 } 
01:30:13.452 }, 01:30:13.452 "base_bdevs_list": [ 01:30:13.452 { 01:30:13.452 "name": "spare", 01:30:13.452 "uuid": "93f41c42-90ac-5375-a7d3-f7408faa04dc", 01:30:13.452 "is_configured": true, 01:30:13.452 "data_offset": 0, 01:30:13.452 "data_size": 65536 01:30:13.452 }, 01:30:13.452 { 01:30:13.452 "name": "BaseBdev2", 01:30:13.452 "uuid": "df9ad6d9-2d18-5bea-9662-93598f7996dc", 01:30:13.452 "is_configured": true, 01:30:13.452 "data_offset": 0, 01:30:13.452 "data_size": 65536 01:30:13.452 }, 01:30:13.452 { 01:30:13.452 "name": "BaseBdev3", 01:30:13.452 "uuid": "9099804d-0d01-5042-82e7-dfdb5c81c43d", 01:30:13.452 "is_configured": true, 01:30:13.452 "data_offset": 0, 01:30:13.452 "data_size": 65536 01:30:13.452 } 01:30:13.452 ] 01:30:13.452 }' 01:30:13.452 05:25:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:30:13.452 05:25:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 01:30:13.452 05:25:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:30:13.452 05:25:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 01:30:13.452 05:25:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 01:30:14.823 05:25:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 01:30:14.823 05:25:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 01:30:14.823 05:25:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:30:14.823 05:25:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 01:30:14.823 05:25:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 01:30:14.823 05:25:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:30:14.823 05:25:06 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:30:14.823 05:25:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:30:14.823 05:25:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:14.823 05:25:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 01:30:14.823 05:25:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:14.823 05:25:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:30:14.823 "name": "raid_bdev1", 01:30:14.823 "uuid": "8b7bb20d-1db5-4095-a948-c06f29dc1557", 01:30:14.823 "strip_size_kb": 64, 01:30:14.823 "state": "online", 01:30:14.823 "raid_level": "raid5f", 01:30:14.823 "superblock": false, 01:30:14.823 "num_base_bdevs": 3, 01:30:14.823 "num_base_bdevs_discovered": 3, 01:30:14.823 "num_base_bdevs_operational": 3, 01:30:14.823 "process": { 01:30:14.823 "type": "rebuild", 01:30:14.823 "target": "spare", 01:30:14.823 "progress": { 01:30:14.823 "blocks": 94208, 01:30:14.823 "percent": 71 01:30:14.823 } 01:30:14.823 }, 01:30:14.823 "base_bdevs_list": [ 01:30:14.823 { 01:30:14.823 "name": "spare", 01:30:14.824 "uuid": "93f41c42-90ac-5375-a7d3-f7408faa04dc", 01:30:14.824 "is_configured": true, 01:30:14.824 "data_offset": 0, 01:30:14.824 "data_size": 65536 01:30:14.824 }, 01:30:14.824 { 01:30:14.824 "name": "BaseBdev2", 01:30:14.824 "uuid": "df9ad6d9-2d18-5bea-9662-93598f7996dc", 01:30:14.824 "is_configured": true, 01:30:14.824 "data_offset": 0, 01:30:14.824 "data_size": 65536 01:30:14.824 }, 01:30:14.824 { 01:30:14.824 "name": "BaseBdev3", 01:30:14.824 "uuid": "9099804d-0d01-5042-82e7-dfdb5c81c43d", 01:30:14.824 "is_configured": true, 01:30:14.824 "data_offset": 0, 01:30:14.824 "data_size": 65536 01:30:14.824 } 01:30:14.824 ] 01:30:14.824 }' 01:30:14.824 05:25:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- 
# jq -r '.process.type // "none"' 01:30:14.824 05:25:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 01:30:14.824 05:25:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:30:14.824 05:25:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 01:30:14.824 05:25:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 01:30:15.759 05:25:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 01:30:15.759 05:25:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 01:30:15.759 05:25:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:30:15.759 05:25:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 01:30:15.759 05:25:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 01:30:15.759 05:25:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:30:15.759 05:25:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:30:15.759 05:25:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:30:15.759 05:25:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:15.759 05:25:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 01:30:15.759 05:25:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:15.759 05:25:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:30:15.759 "name": "raid_bdev1", 01:30:15.759 "uuid": "8b7bb20d-1db5-4095-a948-c06f29dc1557", 01:30:15.759 "strip_size_kb": 64, 01:30:15.759 "state": "online", 01:30:15.759 "raid_level": "raid5f", 01:30:15.759 "superblock": 
false, 01:30:15.759 "num_base_bdevs": 3, 01:30:15.759 "num_base_bdevs_discovered": 3, 01:30:15.759 "num_base_bdevs_operational": 3, 01:30:15.759 "process": { 01:30:15.759 "type": "rebuild", 01:30:15.759 "target": "spare", 01:30:15.759 "progress": { 01:30:15.759 "blocks": 116736, 01:30:15.759 "percent": 89 01:30:15.759 } 01:30:15.759 }, 01:30:15.759 "base_bdevs_list": [ 01:30:15.759 { 01:30:15.759 "name": "spare", 01:30:15.759 "uuid": "93f41c42-90ac-5375-a7d3-f7408faa04dc", 01:30:15.759 "is_configured": true, 01:30:15.759 "data_offset": 0, 01:30:15.759 "data_size": 65536 01:30:15.759 }, 01:30:15.759 { 01:30:15.759 "name": "BaseBdev2", 01:30:15.759 "uuid": "df9ad6d9-2d18-5bea-9662-93598f7996dc", 01:30:15.759 "is_configured": true, 01:30:15.759 "data_offset": 0, 01:30:15.759 "data_size": 65536 01:30:15.759 }, 01:30:15.759 { 01:30:15.759 "name": "BaseBdev3", 01:30:15.759 "uuid": "9099804d-0d01-5042-82e7-dfdb5c81c43d", 01:30:15.759 "is_configured": true, 01:30:15.759 "data_offset": 0, 01:30:15.759 "data_size": 65536 01:30:15.759 } 01:30:15.759 ] 01:30:15.759 }' 01:30:15.759 05:25:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:30:15.759 05:25:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 01:30:15.759 05:25:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:30:15.759 05:25:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 01:30:15.760 05:25:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 01:30:16.327 [2024-12-09 05:25:07.853999] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 01:30:16.327 [2024-12-09 05:25:07.854100] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 01:30:16.327 [2024-12-09 05:25:07.854184] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
01:30:16.972 05:25:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 01:30:16.972 05:25:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 01:30:16.972 05:25:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:30:16.972 05:25:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 01:30:16.972 05:25:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 01:30:16.972 05:25:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:30:16.972 05:25:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:30:16.972 05:25:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:16.972 05:25:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 01:30:16.972 05:25:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:30:16.972 05:25:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:16.972 05:25:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:30:16.972 "name": "raid_bdev1", 01:30:16.972 "uuid": "8b7bb20d-1db5-4095-a948-c06f29dc1557", 01:30:16.972 "strip_size_kb": 64, 01:30:16.972 "state": "online", 01:30:16.972 "raid_level": "raid5f", 01:30:16.972 "superblock": false, 01:30:16.972 "num_base_bdevs": 3, 01:30:16.972 "num_base_bdevs_discovered": 3, 01:30:16.972 "num_base_bdevs_operational": 3, 01:30:16.972 "base_bdevs_list": [ 01:30:16.972 { 01:30:16.972 "name": "spare", 01:30:16.972 "uuid": "93f41c42-90ac-5375-a7d3-f7408faa04dc", 01:30:16.973 "is_configured": true, 01:30:16.973 "data_offset": 0, 01:30:16.973 "data_size": 65536 01:30:16.973 }, 01:30:16.973 { 01:30:16.973 "name": "BaseBdev2", 01:30:16.973 "uuid": 
"df9ad6d9-2d18-5bea-9662-93598f7996dc", 01:30:16.973 "is_configured": true, 01:30:16.973 "data_offset": 0, 01:30:16.973 "data_size": 65536 01:30:16.973 }, 01:30:16.973 { 01:30:16.973 "name": "BaseBdev3", 01:30:16.973 "uuid": "9099804d-0d01-5042-82e7-dfdb5c81c43d", 01:30:16.973 "is_configured": true, 01:30:16.973 "data_offset": 0, 01:30:16.973 "data_size": 65536 01:30:16.973 } 01:30:16.973 ] 01:30:16.973 }' 01:30:16.973 05:25:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:30:16.973 05:25:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 01:30:16.973 05:25:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:30:16.973 05:25:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 01:30:16.973 05:25:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 01:30:16.973 05:25:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 01:30:16.973 05:25:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:30:16.973 05:25:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 01:30:16.973 05:25:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 01:30:16.973 05:25:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:30:16.973 05:25:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:30:16.973 05:25:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:30:16.973 05:25:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:16.973 05:25:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 01:30:16.973 05:25:08 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:17.231 05:25:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:30:17.231 "name": "raid_bdev1", 01:30:17.231 "uuid": "8b7bb20d-1db5-4095-a948-c06f29dc1557", 01:30:17.231 "strip_size_kb": 64, 01:30:17.231 "state": "online", 01:30:17.231 "raid_level": "raid5f", 01:30:17.231 "superblock": false, 01:30:17.231 "num_base_bdevs": 3, 01:30:17.231 "num_base_bdevs_discovered": 3, 01:30:17.231 "num_base_bdevs_operational": 3, 01:30:17.231 "base_bdevs_list": [ 01:30:17.231 { 01:30:17.231 "name": "spare", 01:30:17.231 "uuid": "93f41c42-90ac-5375-a7d3-f7408faa04dc", 01:30:17.231 "is_configured": true, 01:30:17.231 "data_offset": 0, 01:30:17.231 "data_size": 65536 01:30:17.232 }, 01:30:17.232 { 01:30:17.232 "name": "BaseBdev2", 01:30:17.232 "uuid": "df9ad6d9-2d18-5bea-9662-93598f7996dc", 01:30:17.232 "is_configured": true, 01:30:17.232 "data_offset": 0, 01:30:17.232 "data_size": 65536 01:30:17.232 }, 01:30:17.232 { 01:30:17.232 "name": "BaseBdev3", 01:30:17.232 "uuid": "9099804d-0d01-5042-82e7-dfdb5c81c43d", 01:30:17.232 "is_configured": true, 01:30:17.232 "data_offset": 0, 01:30:17.232 "data_size": 65536 01:30:17.232 } 01:30:17.232 ] 01:30:17.232 }' 01:30:17.232 05:25:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:30:17.232 05:25:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 01:30:17.232 05:25:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:30:17.232 05:25:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 01:30:17.232 05:25:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 01:30:17.232 05:25:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:30:17.232 05:25:08 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 01:30:17.232 05:25:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 01:30:17.232 05:25:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:30:17.232 05:25:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 01:30:17.232 05:25:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:30:17.232 05:25:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:30:17.232 05:25:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:30:17.232 05:25:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:30:17.232 05:25:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:30:17.232 05:25:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:17.232 05:25:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:30:17.232 05:25:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 01:30:17.232 05:25:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:17.232 05:25:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:30:17.232 "name": "raid_bdev1", 01:30:17.232 "uuid": "8b7bb20d-1db5-4095-a948-c06f29dc1557", 01:30:17.232 "strip_size_kb": 64, 01:30:17.232 "state": "online", 01:30:17.232 "raid_level": "raid5f", 01:30:17.232 "superblock": false, 01:30:17.232 "num_base_bdevs": 3, 01:30:17.232 "num_base_bdevs_discovered": 3, 01:30:17.232 "num_base_bdevs_operational": 3, 01:30:17.232 "base_bdevs_list": [ 01:30:17.232 { 01:30:17.232 "name": "spare", 01:30:17.232 "uuid": "93f41c42-90ac-5375-a7d3-f7408faa04dc", 01:30:17.232 "is_configured": true, 01:30:17.232 "data_offset": 
0, 01:30:17.232 "data_size": 65536 01:30:17.232 }, 01:30:17.232 { 01:30:17.232 "name": "BaseBdev2", 01:30:17.232 "uuid": "df9ad6d9-2d18-5bea-9662-93598f7996dc", 01:30:17.232 "is_configured": true, 01:30:17.232 "data_offset": 0, 01:30:17.232 "data_size": 65536 01:30:17.232 }, 01:30:17.232 { 01:30:17.232 "name": "BaseBdev3", 01:30:17.232 "uuid": "9099804d-0d01-5042-82e7-dfdb5c81c43d", 01:30:17.232 "is_configured": true, 01:30:17.232 "data_offset": 0, 01:30:17.232 "data_size": 65536 01:30:17.232 } 01:30:17.232 ] 01:30:17.232 }' 01:30:17.232 05:25:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:30:17.232 05:25:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 01:30:17.798 05:25:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 01:30:17.798 05:25:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:17.798 05:25:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 01:30:17.798 [2024-12-09 05:25:09.233845] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 01:30:17.798 [2024-12-09 05:25:09.233881] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 01:30:17.798 [2024-12-09 05:25:09.233990] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 01:30:17.798 [2024-12-09 05:25:09.234099] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 01:30:17.798 [2024-12-09 05:25:09.234125] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 01:30:17.798 05:25:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:17.798 05:25:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 01:30:17.798 05:25:09 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@720 -- # jq length 01:30:17.798 05:25:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:17.798 05:25:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 01:30:17.798 05:25:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:17.798 05:25:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 01:30:17.798 05:25:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 01:30:17.798 05:25:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 01:30:17.798 05:25:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 01:30:17.798 05:25:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 01:30:17.798 05:25:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 01:30:17.798 05:25:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 01:30:17.798 05:25:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 01:30:17.798 05:25:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 01:30:17.798 05:25:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 01:30:17.798 05:25:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 01:30:17.798 05:25:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 01:30:17.798 05:25:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 01:30:18.057 /dev/nbd0 01:30:18.057 05:25:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 01:30:18.057 05:25:09 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@17 -- # waitfornbd nbd0 01:30:18.057 05:25:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 01:30:18.057 05:25:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 01:30:18.057 05:25:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 01:30:18.057 05:25:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 01:30:18.057 05:25:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 01:30:18.057 05:25:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 01:30:18.057 05:25:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 01:30:18.057 05:25:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 01:30:18.057 05:25:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 01:30:18.057 1+0 records in 01:30:18.057 1+0 records out 01:30:18.057 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000481922 s, 8.5 MB/s 01:30:18.057 05:25:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:30:18.057 05:25:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 01:30:18.057 05:25:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:30:18.057 05:25:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 01:30:18.057 05:25:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 01:30:18.057 05:25:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 01:30:18.057 05:25:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 01:30:18.057 
05:25:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 01:30:18.624 /dev/nbd1 01:30:18.624 05:25:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 01:30:18.624 05:25:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 01:30:18.624 05:25:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 01:30:18.624 05:25:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 01:30:18.624 05:25:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 01:30:18.624 05:25:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 01:30:18.624 05:25:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 01:30:18.624 05:25:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 01:30:18.624 05:25:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 01:30:18.624 05:25:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 01:30:18.624 05:25:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 01:30:18.624 1+0 records in 01:30:18.624 1+0 records out 01:30:18.624 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00055497 s, 7.4 MB/s 01:30:18.624 05:25:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:30:18.624 05:25:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 01:30:18.624 05:25:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:30:18.624 05:25:09 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 01:30:18.624 05:25:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 01:30:18.624 05:25:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 01:30:18.624 05:25:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 01:30:18.624 05:25:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 01:30:18.624 05:25:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 01:30:18.624 05:25:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 01:30:18.624 05:25:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 01:30:18.624 05:25:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 01:30:18.624 05:25:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 01:30:18.624 05:25:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 01:30:18.624 05:25:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 01:30:19.194 05:25:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 01:30:19.194 05:25:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 01:30:19.194 05:25:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 01:30:19.194 05:25:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:30:19.194 05:25:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:30:19.194 05:25:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 01:30:19.194 05:25:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 
01:30:19.194 05:25:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 01:30:19.194 05:25:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 01:30:19.194 05:25:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 01:30:19.452 05:25:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 01:30:19.452 05:25:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 01:30:19.452 05:25:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 01:30:19.452 05:25:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:30:19.452 05:25:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:30:19.452 05:25:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 01:30:19.452 05:25:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 01:30:19.452 05:25:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 01:30:19.452 05:25:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 01:30:19.452 05:25:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 81873 01:30:19.452 05:25:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 81873 ']' 01:30:19.452 05:25:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 81873 01:30:19.452 05:25:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 01:30:19.452 05:25:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:30:19.452 05:25:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81873 01:30:19.452 killing process with pid 81873 01:30:19.452 Received shutdown signal, test time 
was about 60.000000 seconds 01:30:19.452 01:30:19.452 Latency(us) 01:30:19.452 [2024-12-09T05:25:11.069Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:30:19.452 [2024-12-09T05:25:11.069Z] =================================================================================================================== 01:30:19.452 [2024-12-09T05:25:11.069Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 01:30:19.452 05:25:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:30:19.452 05:25:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:30:19.452 05:25:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81873' 01:30:19.452 05:25:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 81873 01:30:19.452 [2024-12-09 05:25:10.897845] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 01:30:19.452 05:25:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 81873 01:30:19.709 [2024-12-09 05:25:11.237186] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 01:30:21.083 ************************************ 01:30:21.083 END TEST raid5f_rebuild_test 01:30:21.083 ************************************ 01:30:21.083 05:25:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 01:30:21.083 01:30:21.083 real 0m16.532s 01:30:21.083 user 0m21.069s 01:30:21.083 sys 0m2.172s 01:30:21.083 05:25:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 01:30:21.083 05:25:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 01:30:21.083 05:25:12 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false true 01:30:21.083 05:25:12 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 01:30:21.083 05:25:12 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 01:30:21.083 05:25:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 01:30:21.083 ************************************ 01:30:21.084 START TEST raid5f_rebuild_test_sb 01:30:21.084 ************************************ 01:30:21.084 05:25:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 true false true 01:30:21.084 05:25:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 01:30:21.084 05:25:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 01:30:21.084 05:25:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 01:30:21.084 05:25:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 01:30:21.084 05:25:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 01:30:21.084 05:25:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 01:30:21.084 05:25:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 01:30:21.084 05:25:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 01:30:21.084 05:25:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 01:30:21.084 05:25:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 01:30:21.084 05:25:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 01:30:21.084 05:25:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 01:30:21.084 05:25:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 01:30:21.084 05:25:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 01:30:21.084 05:25:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 01:30:21.084 05:25:12 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 01:30:21.084 05:25:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 01:30:21.084 05:25:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 01:30:21.084 05:25:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 01:30:21.084 05:25:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 01:30:21.084 05:25:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 01:30:21.084 05:25:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 01:30:21.084 05:25:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 01:30:21.084 05:25:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 01:30:21.084 05:25:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 01:30:21.084 05:25:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 01:30:21.084 05:25:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 01:30:21.084 05:25:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 01:30:21.084 05:25:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 01:30:21.084 05:25:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=82332 01:30:21.084 05:25:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 01:30:21.084 05:25:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 82332 01:30:21.084 05:25:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 82332 
']' 01:30:21.084 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:30:21.084 05:25:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:30:21.084 05:25:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 01:30:21.084 05:25:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:30:21.084 05:25:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 01:30:21.084 05:25:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:30:21.084 I/O size of 3145728 is greater than zero copy threshold (65536). 01:30:21.084 Zero copy mechanism will not be used. 01:30:21.084 [2024-12-09 05:25:12.513710] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:30:21.084 [2024-12-09 05:25:12.513931] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82332 ] 01:30:21.342 [2024-12-09 05:25:12.699266] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:30:21.342 [2024-12-09 05:25:12.830261] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:30:21.599 [2024-12-09 05:25:13.037917] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 01:30:21.599 [2024-12-09 05:25:13.037991] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 01:30:22.165 05:25:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:30:22.165 05:25:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 01:30:22.165 05:25:13 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 01:30:22.165 05:25:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 01:30:22.165 05:25:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:22.165 05:25:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:30:22.165 BaseBdev1_malloc 01:30:22.165 05:25:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:22.165 05:25:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 01:30:22.165 05:25:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:22.165 05:25:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:30:22.165 [2024-12-09 05:25:13.565342] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 01:30:22.165 [2024-12-09 05:25:13.565622] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:30:22.165 [2024-12-09 05:25:13.565669] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 01:30:22.165 [2024-12-09 05:25:13.565693] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:30:22.165 [2024-12-09 05:25:13.568703] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:30:22.165 [2024-12-09 05:25:13.568908] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 01:30:22.165 BaseBdev1 01:30:22.165 05:25:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:22.165 05:25:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 01:30:22.165 05:25:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # 
rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 01:30:22.165 05:25:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:22.165 05:25:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:30:22.165 BaseBdev2_malloc 01:30:22.165 05:25:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:22.165 05:25:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 01:30:22.165 05:25:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:22.165 05:25:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:30:22.165 [2024-12-09 05:25:13.624782] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 01:30:22.165 [2024-12-09 05:25:13.624890] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:30:22.166 [2024-12-09 05:25:13.624950] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 01:30:22.166 [2024-12-09 05:25:13.624971] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:30:22.166 [2024-12-09 05:25:13.628005] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:30:22.166 [2024-12-09 05:25:13.628057] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 01:30:22.166 BaseBdev2 01:30:22.166 05:25:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:22.166 05:25:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 01:30:22.166 05:25:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 01:30:22.166 05:25:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:22.166 
05:25:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:30:22.166 BaseBdev3_malloc 01:30:22.166 05:25:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:22.166 05:25:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 01:30:22.166 05:25:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:22.166 05:25:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:30:22.166 [2024-12-09 05:25:13.691213] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 01:30:22.166 [2024-12-09 05:25:13.691297] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:30:22.166 [2024-12-09 05:25:13.691330] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 01:30:22.166 [2024-12-09 05:25:13.691349] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:30:22.166 [2024-12-09 05:25:13.694378] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:30:22.166 [2024-12-09 05:25:13.694438] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 01:30:22.166 BaseBdev3 01:30:22.166 05:25:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:22.166 05:25:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 01:30:22.166 05:25:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:22.166 05:25:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:30:22.166 spare_malloc 01:30:22.166 05:25:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:22.166 05:25:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 
-- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 01:30:22.166 05:25:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:22.166 05:25:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:30:22.166 spare_delay 01:30:22.166 05:25:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:22.166 05:25:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 01:30:22.166 05:25:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:22.166 05:25:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:30:22.166 [2024-12-09 05:25:13.750650] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 01:30:22.166 [2024-12-09 05:25:13.750740] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:30:22.166 [2024-12-09 05:25:13.750777] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 01:30:22.166 [2024-12-09 05:25:13.750805] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:30:22.166 [2024-12-09 05:25:13.754578] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:30:22.166 [2024-12-09 05:25:13.754633] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 01:30:22.166 spare 01:30:22.166 05:25:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:22.166 05:25:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 01:30:22.166 05:25:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:22.166 05:25:13 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 01:30:22.166 [2024-12-09 05:25:13.762862] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 01:30:22.166 [2024-12-09 05:25:13.765598] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 01:30:22.166 [2024-12-09 05:25:13.765694] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 01:30:22.166 [2024-12-09 05:25:13.765976] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 01:30:22.166 [2024-12-09 05:25:13.765997] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 01:30:22.166 [2024-12-09 05:25:13.766405] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 01:30:22.166 [2024-12-09 05:25:13.772283] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 01:30:22.166 [2024-12-09 05:25:13.772536] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 01:30:22.166 [2024-12-09 05:25:13.772793] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:30:22.166 05:25:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:22.166 05:25:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 01:30:22.166 05:25:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:30:22.166 05:25:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:30:22.166 05:25:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 01:30:22.166 05:25:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:30:22.166 05:25:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 01:30:22.166 05:25:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:30:22.166 05:25:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:30:22.166 05:25:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:30:22.166 05:25:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 01:30:22.166 05:25:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:30:22.424 05:25:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:22.424 05:25:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:30:22.424 05:25:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:30:22.424 05:25:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:22.424 05:25:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:30:22.424 "name": "raid_bdev1", 01:30:22.424 "uuid": "1b16a2e3-2bb4-479d-90f2-3c005c22347f", 01:30:22.424 "strip_size_kb": 64, 01:30:22.424 "state": "online", 01:30:22.424 "raid_level": "raid5f", 01:30:22.424 "superblock": true, 01:30:22.424 "num_base_bdevs": 3, 01:30:22.424 "num_base_bdevs_discovered": 3, 01:30:22.424 "num_base_bdevs_operational": 3, 01:30:22.424 "base_bdevs_list": [ 01:30:22.424 { 01:30:22.424 "name": "BaseBdev1", 01:30:22.424 "uuid": "4bc61f8f-265d-5d27-8ad6-2bf3b50b2e72", 01:30:22.424 "is_configured": true, 01:30:22.424 "data_offset": 2048, 01:30:22.424 "data_size": 63488 01:30:22.424 }, 01:30:22.424 { 01:30:22.424 "name": "BaseBdev2", 01:30:22.424 "uuid": "47cf66f6-811f-5b10-9559-3c47e559c3f2", 01:30:22.424 "is_configured": true, 01:30:22.424 "data_offset": 2048, 01:30:22.424 "data_size": 63488 01:30:22.424 }, 01:30:22.424 { 01:30:22.424 "name": 
"BaseBdev3", 01:30:22.424 "uuid": "bc642bec-f476-5b4d-9307-bac4c84599a9", 01:30:22.424 "is_configured": true, 01:30:22.424 "data_offset": 2048, 01:30:22.424 "data_size": 63488 01:30:22.424 } 01:30:22.424 ] 01:30:22.424 }' 01:30:22.425 05:25:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:30:22.425 05:25:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:30:22.683 05:25:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 01:30:22.683 05:25:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:22.683 05:25:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:30:22.683 05:25:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 01:30:22.942 [2024-12-09 05:25:14.299146] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 01:30:22.942 05:25:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:22.942 05:25:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=126976 01:30:22.942 05:25:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 01:30:22.942 05:25:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 01:30:22.942 05:25:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:22.942 05:25:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:30:22.942 05:25:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:22.942 05:25:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 01:30:22.942 05:25:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 01:30:22.942 05:25:14 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 01:30:22.942 05:25:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 01:30:22.942 05:25:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 01:30:22.942 05:25:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 01:30:22.942 05:25:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 01:30:22.942 05:25:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 01:30:22.942 05:25:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 01:30:22.942 05:25:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 01:30:22.942 05:25:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 01:30:22.942 05:25:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 01:30:22.942 05:25:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 01:30:22.942 05:25:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 01:30:23.200 [2024-12-09 05:25:14.691040] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 01:30:23.200 /dev/nbd0 01:30:23.200 05:25:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 01:30:23.200 05:25:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 01:30:23.200 05:25:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 01:30:23.200 05:25:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 01:30:23.200 05:25:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 
-- # (( i = 1 )) 01:30:23.200 05:25:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 01:30:23.200 05:25:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 01:30:23.200 05:25:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 01:30:23.200 05:25:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 01:30:23.200 05:25:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 01:30:23.200 05:25:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 01:30:23.200 1+0 records in 01:30:23.200 1+0 records out 01:30:23.200 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00114581 s, 3.6 MB/s 01:30:23.200 05:25:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:30:23.200 05:25:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 01:30:23.200 05:25:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:30:23.200 05:25:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 01:30:23.200 05:25:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 01:30:23.200 05:25:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 01:30:23.200 05:25:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 01:30:23.200 05:25:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 01:30:23.200 05:25:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 01:30:23.200 05:25:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 128 
01:30:23.200 05:25:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=496 oflag=direct 01:30:23.765 496+0 records in 01:30:23.765 496+0 records out 01:30:23.765 65011712 bytes (65 MB, 62 MiB) copied, 0.483723 s, 134 MB/s 01:30:23.765 05:25:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 01:30:23.765 05:25:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 01:30:23.765 05:25:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 01:30:23.765 05:25:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 01:30:23.765 05:25:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 01:30:23.765 05:25:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 01:30:23.765 05:25:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 01:30:24.023 05:25:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 01:30:24.023 [2024-12-09 05:25:15.547987] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:30:24.023 05:25:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 01:30:24.023 05:25:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 01:30:24.023 05:25:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:30:24.023 05:25:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:30:24.023 05:25:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 01:30:24.023 05:25:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 01:30:24.023 05:25:15 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 01:30:24.023 05:25:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 01:30:24.023 05:25:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:24.023 05:25:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:30:24.023 [2024-12-09 05:25:15.561805] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 01:30:24.023 05:25:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:24.023 05:25:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 01:30:24.023 05:25:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:30:24.023 05:25:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:30:24.023 05:25:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 01:30:24.023 05:25:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:30:24.023 05:25:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 01:30:24.023 05:25:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:30:24.023 05:25:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:30:24.023 05:25:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:30:24.023 05:25:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 01:30:24.023 05:25:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:30:24.023 05:25:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
01:30:24.023 05:25:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:24.023 05:25:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:30:24.023 05:25:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:24.023 05:25:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:30:24.023 "name": "raid_bdev1", 01:30:24.023 "uuid": "1b16a2e3-2bb4-479d-90f2-3c005c22347f", 01:30:24.023 "strip_size_kb": 64, 01:30:24.023 "state": "online", 01:30:24.023 "raid_level": "raid5f", 01:30:24.023 "superblock": true, 01:30:24.023 "num_base_bdevs": 3, 01:30:24.023 "num_base_bdevs_discovered": 2, 01:30:24.023 "num_base_bdevs_operational": 2, 01:30:24.023 "base_bdevs_list": [ 01:30:24.023 { 01:30:24.023 "name": null, 01:30:24.023 "uuid": "00000000-0000-0000-0000-000000000000", 01:30:24.023 "is_configured": false, 01:30:24.023 "data_offset": 0, 01:30:24.023 "data_size": 63488 01:30:24.023 }, 01:30:24.023 { 01:30:24.023 "name": "BaseBdev2", 01:30:24.023 "uuid": "47cf66f6-811f-5b10-9559-3c47e559c3f2", 01:30:24.023 "is_configured": true, 01:30:24.023 "data_offset": 2048, 01:30:24.023 "data_size": 63488 01:30:24.023 }, 01:30:24.023 { 01:30:24.023 "name": "BaseBdev3", 01:30:24.023 "uuid": "bc642bec-f476-5b4d-9307-bac4c84599a9", 01:30:24.023 "is_configured": true, 01:30:24.023 "data_offset": 2048, 01:30:24.023 "data_size": 63488 01:30:24.023 } 01:30:24.023 ] 01:30:24.023 }' 01:30:24.023 05:25:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:30:24.023 05:25:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:30:24.654 05:25:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 01:30:24.654 05:25:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:24.654 05:25:16 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:30:24.654 [2024-12-09 05:25:16.073960] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 01:30:24.654 [2024-12-09 05:25:16.089244] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028f80 01:30:24.654 05:25:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:24.654 05:25:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 01:30:24.654 [2024-12-09 05:25:16.096633] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 01:30:25.590 05:25:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 01:30:25.590 05:25:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:30:25.590 05:25:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 01:30:25.590 05:25:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 01:30:25.590 05:25:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:30:25.590 05:25:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:30:25.590 05:25:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:25.590 05:25:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:30:25.590 05:25:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:30:25.590 05:25:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:25.590 05:25:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:30:25.590 "name": "raid_bdev1", 01:30:25.590 "uuid": "1b16a2e3-2bb4-479d-90f2-3c005c22347f", 
01:30:25.590 "strip_size_kb": 64, 01:30:25.590 "state": "online", 01:30:25.590 "raid_level": "raid5f", 01:30:25.590 "superblock": true, 01:30:25.590 "num_base_bdevs": 3, 01:30:25.590 "num_base_bdevs_discovered": 3, 01:30:25.590 "num_base_bdevs_operational": 3, 01:30:25.590 "process": { 01:30:25.590 "type": "rebuild", 01:30:25.590 "target": "spare", 01:30:25.590 "progress": { 01:30:25.590 "blocks": 18432, 01:30:25.590 "percent": 14 01:30:25.590 } 01:30:25.590 }, 01:30:25.590 "base_bdevs_list": [ 01:30:25.590 { 01:30:25.590 "name": "spare", 01:30:25.590 "uuid": "598d7051-5b31-5c04-ae75-096dfd352f0a", 01:30:25.590 "is_configured": true, 01:30:25.590 "data_offset": 2048, 01:30:25.590 "data_size": 63488 01:30:25.590 }, 01:30:25.590 { 01:30:25.590 "name": "BaseBdev2", 01:30:25.590 "uuid": "47cf66f6-811f-5b10-9559-3c47e559c3f2", 01:30:25.590 "is_configured": true, 01:30:25.590 "data_offset": 2048, 01:30:25.590 "data_size": 63488 01:30:25.590 }, 01:30:25.590 { 01:30:25.590 "name": "BaseBdev3", 01:30:25.590 "uuid": "bc642bec-f476-5b4d-9307-bac4c84599a9", 01:30:25.590 "is_configured": true, 01:30:25.590 "data_offset": 2048, 01:30:25.590 "data_size": 63488 01:30:25.590 } 01:30:25.590 ] 01:30:25.590 }' 01:30:25.590 05:25:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:30:25.590 05:25:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 01:30:25.590 05:25:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:30:25.849 05:25:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 01:30:25.849 05:25:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 01:30:25.849 05:25:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:25.849 05:25:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # 
set +x 01:30:25.849 [2024-12-09 05:25:17.258202] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 01:30:25.849 [2024-12-09 05:25:17.311348] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 01:30:25.849 [2024-12-09 05:25:17.311630] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:30:25.849 [2024-12-09 05:25:17.311778] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 01:30:25.849 [2024-12-09 05:25:17.311832] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 01:30:25.849 05:25:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:25.849 05:25:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 01:30:25.849 05:25:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:30:25.849 05:25:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:30:25.849 05:25:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 01:30:25.849 05:25:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:30:25.849 05:25:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 01:30:25.849 05:25:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:30:25.849 05:25:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:30:25.849 05:25:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:30:25.849 05:25:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 01:30:25.849 05:25:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:30:25.849 
05:25:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:25.849 05:25:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:30:25.849 05:25:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:30:25.849 05:25:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:25.849 05:25:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:30:25.849 "name": "raid_bdev1", 01:30:25.849 "uuid": "1b16a2e3-2bb4-479d-90f2-3c005c22347f", 01:30:25.849 "strip_size_kb": 64, 01:30:25.849 "state": "online", 01:30:25.849 "raid_level": "raid5f", 01:30:25.849 "superblock": true, 01:30:25.849 "num_base_bdevs": 3, 01:30:25.849 "num_base_bdevs_discovered": 2, 01:30:25.849 "num_base_bdevs_operational": 2, 01:30:25.849 "base_bdevs_list": [ 01:30:25.849 { 01:30:25.849 "name": null, 01:30:25.849 "uuid": "00000000-0000-0000-0000-000000000000", 01:30:25.849 "is_configured": false, 01:30:25.849 "data_offset": 0, 01:30:25.849 "data_size": 63488 01:30:25.849 }, 01:30:25.849 { 01:30:25.849 "name": "BaseBdev2", 01:30:25.849 "uuid": "47cf66f6-811f-5b10-9559-3c47e559c3f2", 01:30:25.849 "is_configured": true, 01:30:25.849 "data_offset": 2048, 01:30:25.849 "data_size": 63488 01:30:25.849 }, 01:30:25.849 { 01:30:25.849 "name": "BaseBdev3", 01:30:25.849 "uuid": "bc642bec-f476-5b4d-9307-bac4c84599a9", 01:30:25.849 "is_configured": true, 01:30:25.849 "data_offset": 2048, 01:30:25.849 "data_size": 63488 01:30:25.849 } 01:30:25.849 ] 01:30:25.849 }' 01:30:25.849 05:25:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:30:25.849 05:25:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:30:26.417 05:25:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 01:30:26.417 05:25:17 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:30:26.417 05:25:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 01:30:26.417 05:25:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 01:30:26.417 05:25:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:30:26.417 05:25:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:30:26.417 05:25:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:26.417 05:25:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:30:26.417 05:25:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:30:26.417 05:25:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:26.417 05:25:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:30:26.417 "name": "raid_bdev1", 01:30:26.417 "uuid": "1b16a2e3-2bb4-479d-90f2-3c005c22347f", 01:30:26.417 "strip_size_kb": 64, 01:30:26.417 "state": "online", 01:30:26.417 "raid_level": "raid5f", 01:30:26.417 "superblock": true, 01:30:26.417 "num_base_bdevs": 3, 01:30:26.417 "num_base_bdevs_discovered": 2, 01:30:26.417 "num_base_bdevs_operational": 2, 01:30:26.417 "base_bdevs_list": [ 01:30:26.417 { 01:30:26.417 "name": null, 01:30:26.417 "uuid": "00000000-0000-0000-0000-000000000000", 01:30:26.417 "is_configured": false, 01:30:26.417 "data_offset": 0, 01:30:26.417 "data_size": 63488 01:30:26.417 }, 01:30:26.417 { 01:30:26.417 "name": "BaseBdev2", 01:30:26.417 "uuid": "47cf66f6-811f-5b10-9559-3c47e559c3f2", 01:30:26.417 "is_configured": true, 01:30:26.417 "data_offset": 2048, 01:30:26.417 "data_size": 63488 01:30:26.417 }, 01:30:26.417 { 01:30:26.417 "name": "BaseBdev3", 01:30:26.417 "uuid": 
"bc642bec-f476-5b4d-9307-bac4c84599a9", 01:30:26.417 "is_configured": true, 01:30:26.417 "data_offset": 2048, 01:30:26.417 "data_size": 63488 01:30:26.417 } 01:30:26.417 ] 01:30:26.417 }' 01:30:26.417 05:25:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:30:26.417 05:25:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 01:30:26.417 05:25:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:30:26.417 05:25:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 01:30:26.417 05:25:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 01:30:26.417 05:25:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:26.417 05:25:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:30:26.417 [2024-12-09 05:25:18.019346] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 01:30:26.675 [2024-12-09 05:25:18.033882] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000029050 01:30:26.675 05:25:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:26.675 05:25:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 01:30:26.675 [2024-12-09 05:25:18.041156] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 01:30:27.608 05:25:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 01:30:27.608 05:25:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:30:27.608 05:25:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 01:30:27.608 05:25:19 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@171 -- # local target=spare 01:30:27.608 05:25:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:30:27.608 05:25:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:30:27.608 05:25:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:30:27.608 05:25:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:27.608 05:25:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:30:27.608 05:25:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:27.608 05:25:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:30:27.608 "name": "raid_bdev1", 01:30:27.608 "uuid": "1b16a2e3-2bb4-479d-90f2-3c005c22347f", 01:30:27.608 "strip_size_kb": 64, 01:30:27.608 "state": "online", 01:30:27.608 "raid_level": "raid5f", 01:30:27.608 "superblock": true, 01:30:27.608 "num_base_bdevs": 3, 01:30:27.608 "num_base_bdevs_discovered": 3, 01:30:27.608 "num_base_bdevs_operational": 3, 01:30:27.608 "process": { 01:30:27.608 "type": "rebuild", 01:30:27.608 "target": "spare", 01:30:27.608 "progress": { 01:30:27.608 "blocks": 18432, 01:30:27.608 "percent": 14 01:30:27.608 } 01:30:27.608 }, 01:30:27.608 "base_bdevs_list": [ 01:30:27.608 { 01:30:27.608 "name": "spare", 01:30:27.608 "uuid": "598d7051-5b31-5c04-ae75-096dfd352f0a", 01:30:27.608 "is_configured": true, 01:30:27.608 "data_offset": 2048, 01:30:27.608 "data_size": 63488 01:30:27.608 }, 01:30:27.608 { 01:30:27.608 "name": "BaseBdev2", 01:30:27.608 "uuid": "47cf66f6-811f-5b10-9559-3c47e559c3f2", 01:30:27.608 "is_configured": true, 01:30:27.608 "data_offset": 2048, 01:30:27.608 "data_size": 63488 01:30:27.608 }, 01:30:27.608 { 01:30:27.608 "name": "BaseBdev3", 01:30:27.608 "uuid": "bc642bec-f476-5b4d-9307-bac4c84599a9", 01:30:27.608 
"is_configured": true, 01:30:27.608 "data_offset": 2048, 01:30:27.608 "data_size": 63488 01:30:27.608 } 01:30:27.608 ] 01:30:27.608 }' 01:30:27.608 05:25:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:30:27.608 05:25:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 01:30:27.608 05:25:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:30:27.608 05:25:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 01:30:27.608 05:25:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 01:30:27.608 05:25:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 01:30:27.608 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 01:30:27.608 05:25:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 01:30:27.608 05:25:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 01:30:27.608 05:25:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=621 01:30:27.608 05:25:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 01:30:27.608 05:25:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 01:30:27.608 05:25:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:30:27.608 05:25:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 01:30:27.608 05:25:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 01:30:27.608 05:25:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:30:27.608 05:25:19 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:30:27.608 05:25:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:30:27.608 05:25:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:27.608 05:25:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:30:27.865 05:25:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:27.865 05:25:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:30:27.865 "name": "raid_bdev1", 01:30:27.865 "uuid": "1b16a2e3-2bb4-479d-90f2-3c005c22347f", 01:30:27.865 "strip_size_kb": 64, 01:30:27.865 "state": "online", 01:30:27.865 "raid_level": "raid5f", 01:30:27.865 "superblock": true, 01:30:27.865 "num_base_bdevs": 3, 01:30:27.865 "num_base_bdevs_discovered": 3, 01:30:27.865 "num_base_bdevs_operational": 3, 01:30:27.865 "process": { 01:30:27.865 "type": "rebuild", 01:30:27.865 "target": "spare", 01:30:27.865 "progress": { 01:30:27.865 "blocks": 22528, 01:30:27.865 "percent": 17 01:30:27.865 } 01:30:27.865 }, 01:30:27.865 "base_bdevs_list": [ 01:30:27.865 { 01:30:27.865 "name": "spare", 01:30:27.865 "uuid": "598d7051-5b31-5c04-ae75-096dfd352f0a", 01:30:27.865 "is_configured": true, 01:30:27.865 "data_offset": 2048, 01:30:27.865 "data_size": 63488 01:30:27.865 }, 01:30:27.865 { 01:30:27.865 "name": "BaseBdev2", 01:30:27.865 "uuid": "47cf66f6-811f-5b10-9559-3c47e559c3f2", 01:30:27.865 "is_configured": true, 01:30:27.865 "data_offset": 2048, 01:30:27.865 "data_size": 63488 01:30:27.865 }, 01:30:27.865 { 01:30:27.865 "name": "BaseBdev3", 01:30:27.865 "uuid": "bc642bec-f476-5b4d-9307-bac4c84599a9", 01:30:27.865 "is_configured": true, 01:30:27.865 "data_offset": 2048, 01:30:27.865 "data_size": 63488 01:30:27.865 } 01:30:27.865 ] 01:30:27.865 }' 01:30:27.865 05:25:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq 
-r '.process.type // "none"' 01:30:27.865 05:25:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 01:30:27.865 05:25:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:30:27.865 05:25:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 01:30:27.865 05:25:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 01:30:28.797 05:25:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 01:30:28.797 05:25:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 01:30:28.797 05:25:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:30:28.797 05:25:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 01:30:28.797 05:25:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 01:30:28.797 05:25:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:30:28.797 05:25:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:30:28.797 05:25:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:30:28.797 05:25:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:28.797 05:25:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:30:28.797 05:25:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:29.055 05:25:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:30:29.055 "name": "raid_bdev1", 01:30:29.055 "uuid": "1b16a2e3-2bb4-479d-90f2-3c005c22347f", 01:30:29.055 "strip_size_kb": 64, 01:30:29.055 "state": "online", 01:30:29.055 
"raid_level": "raid5f", 01:30:29.055 "superblock": true, 01:30:29.055 "num_base_bdevs": 3, 01:30:29.055 "num_base_bdevs_discovered": 3, 01:30:29.055 "num_base_bdevs_operational": 3, 01:30:29.055 "process": { 01:30:29.055 "type": "rebuild", 01:30:29.055 "target": "spare", 01:30:29.055 "progress": { 01:30:29.055 "blocks": 47104, 01:30:29.055 "percent": 37 01:30:29.055 } 01:30:29.055 }, 01:30:29.055 "base_bdevs_list": [ 01:30:29.055 { 01:30:29.055 "name": "spare", 01:30:29.055 "uuid": "598d7051-5b31-5c04-ae75-096dfd352f0a", 01:30:29.055 "is_configured": true, 01:30:29.055 "data_offset": 2048, 01:30:29.055 "data_size": 63488 01:30:29.055 }, 01:30:29.055 { 01:30:29.055 "name": "BaseBdev2", 01:30:29.055 "uuid": "47cf66f6-811f-5b10-9559-3c47e559c3f2", 01:30:29.055 "is_configured": true, 01:30:29.055 "data_offset": 2048, 01:30:29.055 "data_size": 63488 01:30:29.055 }, 01:30:29.055 { 01:30:29.055 "name": "BaseBdev3", 01:30:29.055 "uuid": "bc642bec-f476-5b4d-9307-bac4c84599a9", 01:30:29.055 "is_configured": true, 01:30:29.055 "data_offset": 2048, 01:30:29.055 "data_size": 63488 01:30:29.055 } 01:30:29.055 ] 01:30:29.055 }' 01:30:29.055 05:25:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:30:29.055 05:25:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 01:30:29.055 05:25:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:30:29.055 05:25:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 01:30:29.055 05:25:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 01:30:29.987 05:25:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 01:30:29.987 05:25:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 01:30:29.987 05:25:21 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:30:29.987 05:25:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 01:30:29.987 05:25:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 01:30:29.987 05:25:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:30:29.987 05:25:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:30:29.987 05:25:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:29.987 05:25:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:30:29.987 05:25:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:30:29.987 05:25:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:29.987 05:25:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:30:29.987 "name": "raid_bdev1", 01:30:29.987 "uuid": "1b16a2e3-2bb4-479d-90f2-3c005c22347f", 01:30:29.987 "strip_size_kb": 64, 01:30:29.987 "state": "online", 01:30:29.987 "raid_level": "raid5f", 01:30:29.987 "superblock": true, 01:30:29.987 "num_base_bdevs": 3, 01:30:29.987 "num_base_bdevs_discovered": 3, 01:30:29.987 "num_base_bdevs_operational": 3, 01:30:29.987 "process": { 01:30:29.987 "type": "rebuild", 01:30:29.987 "target": "spare", 01:30:29.987 "progress": { 01:30:29.987 "blocks": 69632, 01:30:29.987 "percent": 54 01:30:29.987 } 01:30:29.987 }, 01:30:29.987 "base_bdevs_list": [ 01:30:29.987 { 01:30:29.987 "name": "spare", 01:30:29.987 "uuid": "598d7051-5b31-5c04-ae75-096dfd352f0a", 01:30:29.987 "is_configured": true, 01:30:29.987 "data_offset": 2048, 01:30:29.987 "data_size": 63488 01:30:29.987 }, 01:30:29.987 { 01:30:29.987 "name": "BaseBdev2", 01:30:29.987 "uuid": "47cf66f6-811f-5b10-9559-3c47e559c3f2", 01:30:29.987 
"is_configured": true, 01:30:29.987 "data_offset": 2048, 01:30:29.987 "data_size": 63488 01:30:29.987 }, 01:30:29.987 { 01:30:29.987 "name": "BaseBdev3", 01:30:29.987 "uuid": "bc642bec-f476-5b4d-9307-bac4c84599a9", 01:30:29.987 "is_configured": true, 01:30:29.987 "data_offset": 2048, 01:30:29.987 "data_size": 63488 01:30:29.987 } 01:30:29.987 ] 01:30:29.987 }' 01:30:29.987 05:25:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:30:30.244 05:25:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 01:30:30.244 05:25:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:30:30.244 05:25:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 01:30:30.244 05:25:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 01:30:31.180 05:25:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 01:30:31.180 05:25:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 01:30:31.180 05:25:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:30:31.180 05:25:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 01:30:31.180 05:25:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 01:30:31.180 05:25:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:30:31.180 05:25:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:30:31.180 05:25:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:30:31.180 05:25:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:31.180 05:25:22 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:30:31.180 05:25:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:31.180 05:25:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:30:31.180 "name": "raid_bdev1", 01:30:31.180 "uuid": "1b16a2e3-2bb4-479d-90f2-3c005c22347f", 01:30:31.180 "strip_size_kb": 64, 01:30:31.180 "state": "online", 01:30:31.180 "raid_level": "raid5f", 01:30:31.180 "superblock": true, 01:30:31.180 "num_base_bdevs": 3, 01:30:31.180 "num_base_bdevs_discovered": 3, 01:30:31.180 "num_base_bdevs_operational": 3, 01:30:31.180 "process": { 01:30:31.180 "type": "rebuild", 01:30:31.180 "target": "spare", 01:30:31.180 "progress": { 01:30:31.180 "blocks": 92160, 01:30:31.180 "percent": 72 01:30:31.180 } 01:30:31.180 }, 01:30:31.180 "base_bdevs_list": [ 01:30:31.180 { 01:30:31.180 "name": "spare", 01:30:31.180 "uuid": "598d7051-5b31-5c04-ae75-096dfd352f0a", 01:30:31.180 "is_configured": true, 01:30:31.180 "data_offset": 2048, 01:30:31.180 "data_size": 63488 01:30:31.180 }, 01:30:31.180 { 01:30:31.180 "name": "BaseBdev2", 01:30:31.180 "uuid": "47cf66f6-811f-5b10-9559-3c47e559c3f2", 01:30:31.180 "is_configured": true, 01:30:31.180 "data_offset": 2048, 01:30:31.180 "data_size": 63488 01:30:31.180 }, 01:30:31.180 { 01:30:31.180 "name": "BaseBdev3", 01:30:31.180 "uuid": "bc642bec-f476-5b4d-9307-bac4c84599a9", 01:30:31.180 "is_configured": true, 01:30:31.180 "data_offset": 2048, 01:30:31.180 "data_size": 63488 01:30:31.180 } 01:30:31.180 ] 01:30:31.180 }' 01:30:31.180 05:25:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:30:31.180 05:25:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 01:30:31.438 05:25:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:30:31.438 05:25:22 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 01:30:31.438 05:25:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 01:30:32.369 05:25:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 01:30:32.369 05:25:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 01:30:32.369 05:25:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:30:32.369 05:25:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 01:30:32.369 05:25:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 01:30:32.369 05:25:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:30:32.369 05:25:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:30:32.369 05:25:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:32.370 05:25:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:30:32.370 05:25:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:30:32.370 05:25:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:32.370 05:25:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:30:32.370 "name": "raid_bdev1", 01:30:32.370 "uuid": "1b16a2e3-2bb4-479d-90f2-3c005c22347f", 01:30:32.370 "strip_size_kb": 64, 01:30:32.370 "state": "online", 01:30:32.370 "raid_level": "raid5f", 01:30:32.370 "superblock": true, 01:30:32.370 "num_base_bdevs": 3, 01:30:32.370 "num_base_bdevs_discovered": 3, 01:30:32.370 "num_base_bdevs_operational": 3, 01:30:32.370 "process": { 01:30:32.370 "type": "rebuild", 01:30:32.370 "target": "spare", 01:30:32.370 "progress": { 01:30:32.370 "blocks": 116736, 
01:30:32.370 "percent": 91 01:30:32.370 } 01:30:32.370 }, 01:30:32.370 "base_bdevs_list": [ 01:30:32.370 { 01:30:32.370 "name": "spare", 01:30:32.370 "uuid": "598d7051-5b31-5c04-ae75-096dfd352f0a", 01:30:32.370 "is_configured": true, 01:30:32.370 "data_offset": 2048, 01:30:32.370 "data_size": 63488 01:30:32.370 }, 01:30:32.370 { 01:30:32.370 "name": "BaseBdev2", 01:30:32.370 "uuid": "47cf66f6-811f-5b10-9559-3c47e559c3f2", 01:30:32.370 "is_configured": true, 01:30:32.370 "data_offset": 2048, 01:30:32.370 "data_size": 63488 01:30:32.370 }, 01:30:32.370 { 01:30:32.370 "name": "BaseBdev3", 01:30:32.370 "uuid": "bc642bec-f476-5b4d-9307-bac4c84599a9", 01:30:32.370 "is_configured": true, 01:30:32.370 "data_offset": 2048, 01:30:32.370 "data_size": 63488 01:30:32.370 } 01:30:32.370 ] 01:30:32.370 }' 01:30:32.370 05:25:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:30:32.370 05:25:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 01:30:32.370 05:25:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:30:32.627 05:25:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 01:30:32.627 05:25:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 01:30:32.885 [2024-12-09 05:25:24.316935] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 01:30:32.885 [2024-12-09 05:25:24.317022] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 01:30:32.885 [2024-12-09 05:25:24.317187] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:30:33.454 05:25:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 01:30:33.454 05:25:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 01:30:33.454 
05:25:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:30:33.454 05:25:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 01:30:33.454 05:25:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 01:30:33.454 05:25:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:30:33.454 05:25:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:30:33.454 05:25:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:30:33.454 05:25:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:33.454 05:25:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:30:33.454 05:25:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:33.712 05:25:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:30:33.712 "name": "raid_bdev1", 01:30:33.712 "uuid": "1b16a2e3-2bb4-479d-90f2-3c005c22347f", 01:30:33.712 "strip_size_kb": 64, 01:30:33.712 "state": "online", 01:30:33.712 "raid_level": "raid5f", 01:30:33.712 "superblock": true, 01:30:33.712 "num_base_bdevs": 3, 01:30:33.712 "num_base_bdevs_discovered": 3, 01:30:33.712 "num_base_bdevs_operational": 3, 01:30:33.712 "base_bdevs_list": [ 01:30:33.712 { 01:30:33.712 "name": "spare", 01:30:33.712 "uuid": "598d7051-5b31-5c04-ae75-096dfd352f0a", 01:30:33.712 "is_configured": true, 01:30:33.712 "data_offset": 2048, 01:30:33.712 "data_size": 63488 01:30:33.712 }, 01:30:33.712 { 01:30:33.712 "name": "BaseBdev2", 01:30:33.712 "uuid": "47cf66f6-811f-5b10-9559-3c47e559c3f2", 01:30:33.712 "is_configured": true, 01:30:33.712 "data_offset": 2048, 01:30:33.712 "data_size": 63488 01:30:33.712 }, 01:30:33.712 { 01:30:33.712 "name": "BaseBdev3", 01:30:33.712 
"uuid": "bc642bec-f476-5b4d-9307-bac4c84599a9", 01:30:33.712 "is_configured": true, 01:30:33.712 "data_offset": 2048, 01:30:33.712 "data_size": 63488 01:30:33.712 } 01:30:33.712 ] 01:30:33.712 }' 01:30:33.712 05:25:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:30:33.712 05:25:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 01:30:33.712 05:25:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:30:33.712 05:25:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 01:30:33.712 05:25:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 01:30:33.712 05:25:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 01:30:33.712 05:25:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:30:33.712 05:25:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 01:30:33.712 05:25:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 01:30:33.712 05:25:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:30:33.712 05:25:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:30:33.712 05:25:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:30:33.712 05:25:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:33.712 05:25:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:30:33.712 05:25:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:33.712 05:25:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:30:33.712 "name": 
"raid_bdev1", 01:30:33.712 "uuid": "1b16a2e3-2bb4-479d-90f2-3c005c22347f", 01:30:33.712 "strip_size_kb": 64, 01:30:33.712 "state": "online", 01:30:33.712 "raid_level": "raid5f", 01:30:33.712 "superblock": true, 01:30:33.712 "num_base_bdevs": 3, 01:30:33.712 "num_base_bdevs_discovered": 3, 01:30:33.712 "num_base_bdevs_operational": 3, 01:30:33.712 "base_bdevs_list": [ 01:30:33.712 { 01:30:33.712 "name": "spare", 01:30:33.712 "uuid": "598d7051-5b31-5c04-ae75-096dfd352f0a", 01:30:33.712 "is_configured": true, 01:30:33.712 "data_offset": 2048, 01:30:33.712 "data_size": 63488 01:30:33.712 }, 01:30:33.712 { 01:30:33.712 "name": "BaseBdev2", 01:30:33.712 "uuid": "47cf66f6-811f-5b10-9559-3c47e559c3f2", 01:30:33.712 "is_configured": true, 01:30:33.712 "data_offset": 2048, 01:30:33.712 "data_size": 63488 01:30:33.712 }, 01:30:33.712 { 01:30:33.712 "name": "BaseBdev3", 01:30:33.712 "uuid": "bc642bec-f476-5b4d-9307-bac4c84599a9", 01:30:33.712 "is_configured": true, 01:30:33.712 "data_offset": 2048, 01:30:33.712 "data_size": 63488 01:30:33.712 } 01:30:33.712 ] 01:30:33.712 }' 01:30:33.712 05:25:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:30:33.712 05:25:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 01:30:33.712 05:25:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:30:33.970 05:25:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 01:30:33.970 05:25:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 01:30:33.970 05:25:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:30:33.970 05:25:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:30:33.970 05:25:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid5f 01:30:33.970 05:25:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:30:33.970 05:25:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 01:30:33.970 05:25:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:30:33.970 05:25:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:30:33.970 05:25:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:30:33.970 05:25:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 01:30:33.970 05:25:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:30:33.970 05:25:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:30:33.970 05:25:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:33.970 05:25:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:30:33.971 05:25:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:33.971 05:25:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:30:33.971 "name": "raid_bdev1", 01:30:33.971 "uuid": "1b16a2e3-2bb4-479d-90f2-3c005c22347f", 01:30:33.971 "strip_size_kb": 64, 01:30:33.971 "state": "online", 01:30:33.971 "raid_level": "raid5f", 01:30:33.971 "superblock": true, 01:30:33.971 "num_base_bdevs": 3, 01:30:33.971 "num_base_bdevs_discovered": 3, 01:30:33.971 "num_base_bdevs_operational": 3, 01:30:33.971 "base_bdevs_list": [ 01:30:33.971 { 01:30:33.971 "name": "spare", 01:30:33.971 "uuid": "598d7051-5b31-5c04-ae75-096dfd352f0a", 01:30:33.971 "is_configured": true, 01:30:33.971 "data_offset": 2048, 01:30:33.971 "data_size": 63488 01:30:33.971 }, 01:30:33.971 { 01:30:33.971 "name": "BaseBdev2", 
01:30:33.971 "uuid": "47cf66f6-811f-5b10-9559-3c47e559c3f2", 01:30:33.971 "is_configured": true, 01:30:33.971 "data_offset": 2048, 01:30:33.971 "data_size": 63488 01:30:33.971 }, 01:30:33.971 { 01:30:33.971 "name": "BaseBdev3", 01:30:33.971 "uuid": "bc642bec-f476-5b4d-9307-bac4c84599a9", 01:30:33.971 "is_configured": true, 01:30:33.971 "data_offset": 2048, 01:30:33.971 "data_size": 63488 01:30:33.971 } 01:30:33.971 ] 01:30:33.971 }' 01:30:33.971 05:25:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:30:33.971 05:25:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:30:34.537 05:25:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 01:30:34.537 05:25:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:34.537 05:25:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:30:34.537 [2024-12-09 05:25:25.909249] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 01:30:34.537 [2024-12-09 05:25:25.909510] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 01:30:34.537 [2024-12-09 05:25:25.909662] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 01:30:34.537 [2024-12-09 05:25:25.909793] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 01:30:34.537 [2024-12-09 05:25:25.909820] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 01:30:34.537 05:25:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:34.537 05:25:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 01:30:34.537 05:25:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 01:30:34.537 05:25:25 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:34.537 05:25:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:30:34.537 05:25:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:34.537 05:25:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 01:30:34.537 05:25:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 01:30:34.537 05:25:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 01:30:34.538 05:25:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 01:30:34.538 05:25:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 01:30:34.538 05:25:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 01:30:34.538 05:25:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 01:30:34.538 05:25:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 01:30:34.538 05:25:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 01:30:34.538 05:25:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 01:30:34.538 05:25:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 01:30:34.538 05:25:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 01:30:34.538 05:25:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 01:30:34.796 /dev/nbd0 01:30:34.796 05:25:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 01:30:34.796 05:25:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 
-- # waitfornbd nbd0 01:30:34.796 05:25:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 01:30:34.796 05:25:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 01:30:34.796 05:25:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 01:30:34.796 05:25:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 01:30:34.796 05:25:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 01:30:34.796 05:25:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 01:30:34.796 05:25:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 01:30:34.796 05:25:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 01:30:34.796 05:25:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 01:30:34.796 1+0 records in 01:30:34.796 1+0 records out 01:30:34.796 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000349463 s, 11.7 MB/s 01:30:34.796 05:25:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:30:34.796 05:25:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 01:30:34.796 05:25:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:30:34.796 05:25:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 01:30:34.796 05:25:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 01:30:34.796 05:25:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 01:30:34.796 05:25:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # 
(( i < 2 )) 01:30:34.796 05:25:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 01:30:35.055 /dev/nbd1 01:30:35.055 05:25:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 01:30:35.313 05:25:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 01:30:35.313 05:25:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 01:30:35.313 05:25:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 01:30:35.313 05:25:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 01:30:35.313 05:25:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 01:30:35.313 05:25:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 01:30:35.313 05:25:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 01:30:35.313 05:25:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 01:30:35.313 05:25:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 01:30:35.313 05:25:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 01:30:35.313 1+0 records in 01:30:35.313 1+0 records out 01:30:35.313 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000437819 s, 9.4 MB/s 01:30:35.313 05:25:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:30:35.313 05:25:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 01:30:35.313 05:25:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:30:35.313 05:25:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 01:30:35.313 05:25:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 01:30:35.313 05:25:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 01:30:35.313 05:25:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 01:30:35.313 05:25:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 01:30:35.313 05:25:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 01:30:35.313 05:25:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 01:30:35.313 05:25:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 01:30:35.313 05:25:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 01:30:35.313 05:25:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 01:30:35.313 05:25:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 01:30:35.313 05:25:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 01:30:35.570 05:25:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 01:30:35.570 05:25:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 01:30:35.570 05:25:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 01:30:35.570 05:25:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:30:35.570 05:25:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:30:35.570 05:25:27 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 01:30:35.570 05:25:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 01:30:35.570 05:25:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 01:30:35.571 05:25:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 01:30:35.571 05:25:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 01:30:36.138 05:25:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 01:30:36.138 05:25:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 01:30:36.138 05:25:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 01:30:36.138 05:25:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:30:36.138 05:25:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:30:36.138 05:25:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 01:30:36.138 05:25:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 01:30:36.138 05:25:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 01:30:36.138 05:25:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 01:30:36.138 05:25:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 01:30:36.138 05:25:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:36.138 05:25:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:30:36.138 05:25:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:36.138 05:25:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 
-- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 01:30:36.138 05:25:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:36.138 05:25:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:30:36.138 [2024-12-09 05:25:27.468230] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 01:30:36.138 [2024-12-09 05:25:27.468304] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:30:36.138 [2024-12-09 05:25:27.468392] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 01:30:36.138 [2024-12-09 05:25:27.468419] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:30:36.138 [2024-12-09 05:25:27.471589] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:30:36.138 [2024-12-09 05:25:27.471650] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 01:30:36.138 [2024-12-09 05:25:27.471794] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 01:30:36.138 [2024-12-09 05:25:27.471865] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 01:30:36.138 [2024-12-09 05:25:27.472095] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 01:30:36.138 [2024-12-09 05:25:27.472252] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 01:30:36.138 spare 01:30:36.138 05:25:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:36.138 05:25:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 01:30:36.138 05:25:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:36.138 05:25:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:30:36.138 [2024-12-09 05:25:27.572477] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 01:30:36.138 [2024-12-09 05:25:27.572559] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 01:30:36.138 [2024-12-09 05:25:27.573133] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047700 01:30:36.138 [2024-12-09 05:25:27.578738] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 01:30:36.138 [2024-12-09 05:25:27.578939] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 01:30:36.138 [2024-12-09 05:25:27.579300] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:30:36.138 05:25:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:36.138 05:25:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 01:30:36.138 05:25:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:30:36.138 05:25:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:30:36.138 05:25:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 01:30:36.138 05:25:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:30:36.138 05:25:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 01:30:36.138 05:25:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:30:36.138 05:25:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:30:36.138 05:25:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:30:36.138 05:25:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 01:30:36.138 05:25:27 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:30:36.138 05:25:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:30:36.138 05:25:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:36.138 05:25:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:30:36.138 05:25:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:36.138 05:25:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:30:36.138 "name": "raid_bdev1", 01:30:36.138 "uuid": "1b16a2e3-2bb4-479d-90f2-3c005c22347f", 01:30:36.138 "strip_size_kb": 64, 01:30:36.138 "state": "online", 01:30:36.138 "raid_level": "raid5f", 01:30:36.138 "superblock": true, 01:30:36.138 "num_base_bdevs": 3, 01:30:36.138 "num_base_bdevs_discovered": 3, 01:30:36.138 "num_base_bdevs_operational": 3, 01:30:36.138 "base_bdevs_list": [ 01:30:36.138 { 01:30:36.138 "name": "spare", 01:30:36.138 "uuid": "598d7051-5b31-5c04-ae75-096dfd352f0a", 01:30:36.138 "is_configured": true, 01:30:36.138 "data_offset": 2048, 01:30:36.138 "data_size": 63488 01:30:36.138 }, 01:30:36.138 { 01:30:36.138 "name": "BaseBdev2", 01:30:36.138 "uuid": "47cf66f6-811f-5b10-9559-3c47e559c3f2", 01:30:36.138 "is_configured": true, 01:30:36.138 "data_offset": 2048, 01:30:36.138 "data_size": 63488 01:30:36.138 }, 01:30:36.138 { 01:30:36.138 "name": "BaseBdev3", 01:30:36.138 "uuid": "bc642bec-f476-5b4d-9307-bac4c84599a9", 01:30:36.138 "is_configured": true, 01:30:36.138 "data_offset": 2048, 01:30:36.138 "data_size": 63488 01:30:36.138 } 01:30:36.138 ] 01:30:36.138 }' 01:30:36.138 05:25:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:30:36.138 05:25:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:30:36.706 05:25:28 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 01:30:36.706 05:25:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:30:36.706 05:25:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 01:30:36.706 05:25:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 01:30:36.706 05:25:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:30:36.706 05:25:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:30:36.706 05:25:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:30:36.706 05:25:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:36.706 05:25:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:30:36.706 05:25:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:36.706 05:25:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:30:36.706 "name": "raid_bdev1", 01:30:36.706 "uuid": "1b16a2e3-2bb4-479d-90f2-3c005c22347f", 01:30:36.706 "strip_size_kb": 64, 01:30:36.706 "state": "online", 01:30:36.706 "raid_level": "raid5f", 01:30:36.706 "superblock": true, 01:30:36.706 "num_base_bdevs": 3, 01:30:36.706 "num_base_bdevs_discovered": 3, 01:30:36.706 "num_base_bdevs_operational": 3, 01:30:36.706 "base_bdevs_list": [ 01:30:36.706 { 01:30:36.706 "name": "spare", 01:30:36.706 "uuid": "598d7051-5b31-5c04-ae75-096dfd352f0a", 01:30:36.706 "is_configured": true, 01:30:36.706 "data_offset": 2048, 01:30:36.706 "data_size": 63488 01:30:36.706 }, 01:30:36.706 { 01:30:36.706 "name": "BaseBdev2", 01:30:36.706 "uuid": "47cf66f6-811f-5b10-9559-3c47e559c3f2", 01:30:36.706 "is_configured": true, 01:30:36.706 "data_offset": 2048, 01:30:36.706 "data_size": 63488 
01:30:36.706 }, 01:30:36.706 { 01:30:36.706 "name": "BaseBdev3", 01:30:36.706 "uuid": "bc642bec-f476-5b4d-9307-bac4c84599a9", 01:30:36.706 "is_configured": true, 01:30:36.706 "data_offset": 2048, 01:30:36.706 "data_size": 63488 01:30:36.706 } 01:30:36.706 ] 01:30:36.706 }' 01:30:36.706 05:25:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:30:36.706 05:25:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 01:30:36.706 05:25:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:30:36.706 05:25:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 01:30:36.706 05:25:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 01:30:36.706 05:25:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:36.706 05:25:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:30:36.706 05:25:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 01:30:36.706 05:25:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:36.965 05:25:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 01:30:36.965 05:25:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 01:30:36.966 05:25:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:36.966 05:25:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:30:36.966 [2024-12-09 05:25:28.349972] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 01:30:36.966 05:25:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:36.966 05:25:28 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 01:30:36.966 05:25:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:30:36.966 05:25:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:30:36.966 05:25:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 01:30:36.966 05:25:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:30:36.966 05:25:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 01:30:36.966 05:25:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:30:36.966 05:25:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:30:36.966 05:25:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:30:36.966 05:25:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 01:30:36.966 05:25:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:30:36.966 05:25:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:30:36.966 05:25:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:36.966 05:25:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:30:36.966 05:25:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:36.966 05:25:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:30:36.966 "name": "raid_bdev1", 01:30:36.966 "uuid": "1b16a2e3-2bb4-479d-90f2-3c005c22347f", 01:30:36.966 "strip_size_kb": 64, 01:30:36.966 "state": "online", 01:30:36.966 "raid_level": "raid5f", 01:30:36.966 "superblock": true, 01:30:36.966 "num_base_bdevs": 3, 
01:30:36.966 "num_base_bdevs_discovered": 2, 01:30:36.966 "num_base_bdevs_operational": 2, 01:30:36.966 "base_bdevs_list": [ 01:30:36.966 { 01:30:36.966 "name": null, 01:30:36.966 "uuid": "00000000-0000-0000-0000-000000000000", 01:30:36.966 "is_configured": false, 01:30:36.966 "data_offset": 0, 01:30:36.966 "data_size": 63488 01:30:36.966 }, 01:30:36.966 { 01:30:36.966 "name": "BaseBdev2", 01:30:36.966 "uuid": "47cf66f6-811f-5b10-9559-3c47e559c3f2", 01:30:36.966 "is_configured": true, 01:30:36.966 "data_offset": 2048, 01:30:36.966 "data_size": 63488 01:30:36.966 }, 01:30:36.966 { 01:30:36.966 "name": "BaseBdev3", 01:30:36.966 "uuid": "bc642bec-f476-5b4d-9307-bac4c84599a9", 01:30:36.966 "is_configured": true, 01:30:36.966 "data_offset": 2048, 01:30:36.966 "data_size": 63488 01:30:36.966 } 01:30:36.966 ] 01:30:36.966 }' 01:30:36.966 05:25:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:30:36.966 05:25:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:30:37.534 05:25:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 01:30:37.534 05:25:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:37.534 05:25:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:30:37.534 [2024-12-09 05:25:28.934196] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 01:30:37.534 [2024-12-09 05:25:28.934525] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 01:30:37.534 [2024-12-09 05:25:28.934555] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
01:30:37.534 [2024-12-09 05:25:28.934616] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 01:30:37.534 [2024-12-09 05:25:28.949402] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000477d0 01:30:37.534 05:25:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:37.534 05:25:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 01:30:37.534 [2024-12-09 05:25:28.956768] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 01:30:38.476 05:25:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 01:30:38.476 05:25:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:30:38.476 05:25:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 01:30:38.476 05:25:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 01:30:38.476 05:25:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:30:38.476 05:25:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:30:38.476 05:25:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:38.476 05:25:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:30:38.476 05:25:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:30:38.476 05:25:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:38.476 05:25:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:30:38.476 "name": "raid_bdev1", 01:30:38.476 "uuid": "1b16a2e3-2bb4-479d-90f2-3c005c22347f", 01:30:38.476 "strip_size_kb": 64, 01:30:38.476 "state": "online", 01:30:38.476 
"raid_level": "raid5f", 01:30:38.476 "superblock": true, 01:30:38.476 "num_base_bdevs": 3, 01:30:38.476 "num_base_bdevs_discovered": 3, 01:30:38.476 "num_base_bdevs_operational": 3, 01:30:38.476 "process": { 01:30:38.476 "type": "rebuild", 01:30:38.476 "target": "spare", 01:30:38.476 "progress": { 01:30:38.476 "blocks": 18432, 01:30:38.476 "percent": 14 01:30:38.476 } 01:30:38.476 }, 01:30:38.476 "base_bdevs_list": [ 01:30:38.476 { 01:30:38.476 "name": "spare", 01:30:38.476 "uuid": "598d7051-5b31-5c04-ae75-096dfd352f0a", 01:30:38.476 "is_configured": true, 01:30:38.476 "data_offset": 2048, 01:30:38.476 "data_size": 63488 01:30:38.476 }, 01:30:38.476 { 01:30:38.476 "name": "BaseBdev2", 01:30:38.476 "uuid": "47cf66f6-811f-5b10-9559-3c47e559c3f2", 01:30:38.476 "is_configured": true, 01:30:38.476 "data_offset": 2048, 01:30:38.476 "data_size": 63488 01:30:38.476 }, 01:30:38.476 { 01:30:38.476 "name": "BaseBdev3", 01:30:38.476 "uuid": "bc642bec-f476-5b4d-9307-bac4c84599a9", 01:30:38.476 "is_configured": true, 01:30:38.476 "data_offset": 2048, 01:30:38.476 "data_size": 63488 01:30:38.476 } 01:30:38.476 ] 01:30:38.476 }' 01:30:38.476 05:25:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:30:38.476 05:25:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 01:30:38.476 05:25:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:30:38.734 05:25:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 01:30:38.734 05:25:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 01:30:38.734 05:25:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:38.734 05:25:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:30:38.734 [2024-12-09 05:25:30.127958] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 01:30:38.734 [2024-12-09 05:25:30.173672] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 01:30:38.734 [2024-12-09 05:25:30.173824] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:30:38.734 [2024-12-09 05:25:30.173852] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 01:30:38.734 [2024-12-09 05:25:30.173867] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 01:30:38.734 05:25:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:38.734 05:25:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 01:30:38.734 05:25:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:30:38.734 05:25:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:30:38.734 05:25:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 01:30:38.734 05:25:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:30:38.734 05:25:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 01:30:38.734 05:25:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:30:38.734 05:25:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:30:38.734 05:25:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:30:38.734 05:25:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 01:30:38.734 05:25:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:30:38.734 05:25:30 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 01:30:38.734 05:25:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:30:38.734 05:25:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:30:38.734 05:25:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:38.734 05:25:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:30:38.734 "name": "raid_bdev1", 01:30:38.734 "uuid": "1b16a2e3-2bb4-479d-90f2-3c005c22347f", 01:30:38.734 "strip_size_kb": 64, 01:30:38.734 "state": "online", 01:30:38.734 "raid_level": "raid5f", 01:30:38.734 "superblock": true, 01:30:38.734 "num_base_bdevs": 3, 01:30:38.734 "num_base_bdevs_discovered": 2, 01:30:38.734 "num_base_bdevs_operational": 2, 01:30:38.734 "base_bdevs_list": [ 01:30:38.734 { 01:30:38.734 "name": null, 01:30:38.734 "uuid": "00000000-0000-0000-0000-000000000000", 01:30:38.735 "is_configured": false, 01:30:38.735 "data_offset": 0, 01:30:38.735 "data_size": 63488 01:30:38.735 }, 01:30:38.735 { 01:30:38.735 "name": "BaseBdev2", 01:30:38.735 "uuid": "47cf66f6-811f-5b10-9559-3c47e559c3f2", 01:30:38.735 "is_configured": true, 01:30:38.735 "data_offset": 2048, 01:30:38.735 "data_size": 63488 01:30:38.735 }, 01:30:38.735 { 01:30:38.735 "name": "BaseBdev3", 01:30:38.735 "uuid": "bc642bec-f476-5b4d-9307-bac4c84599a9", 01:30:38.735 "is_configured": true, 01:30:38.735 "data_offset": 2048, 01:30:38.735 "data_size": 63488 01:30:38.735 } 01:30:38.735 ] 01:30:38.735 }' 01:30:38.735 05:25:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:30:38.735 05:25:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:30:39.307 05:25:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 01:30:39.307 05:25:30 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 01:30:39.307 05:25:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:30:39.307 [2024-12-09 05:25:30.779509] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 01:30:39.307 [2024-12-09 05:25:30.779598] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:30:39.307 [2024-12-09 05:25:30.779636] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 01:30:39.307 [2024-12-09 05:25:30.779660] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:30:39.307 [2024-12-09 05:25:30.780505] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:30:39.307 [2024-12-09 05:25:30.780553] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 01:30:39.307 [2024-12-09 05:25:30.780727] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 01:30:39.307 [2024-12-09 05:25:30.780764] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 01:30:39.307 [2024-12-09 05:25:30.780780] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
01:30:39.307 [2024-12-09 05:25:30.780815] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 01:30:39.307 [2024-12-09 05:25:30.796443] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000478a0 01:30:39.307 spare 01:30:39.307 05:25:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:39.307 05:25:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 01:30:39.307 [2024-12-09 05:25:30.804411] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 01:30:40.272 05:25:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 01:30:40.272 05:25:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:30:40.272 05:25:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 01:30:40.272 05:25:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 01:30:40.272 05:25:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:30:40.272 05:25:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:30:40.272 05:25:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:40.272 05:25:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:30:40.272 05:25:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:30:40.272 05:25:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:40.272 05:25:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:30:40.272 "name": "raid_bdev1", 01:30:40.272 "uuid": "1b16a2e3-2bb4-479d-90f2-3c005c22347f", 01:30:40.272 "strip_size_kb": 64, 01:30:40.272 "state": 
"online", 01:30:40.272 "raid_level": "raid5f", 01:30:40.272 "superblock": true, 01:30:40.272 "num_base_bdevs": 3, 01:30:40.272 "num_base_bdevs_discovered": 3, 01:30:40.272 "num_base_bdevs_operational": 3, 01:30:40.272 "process": { 01:30:40.272 "type": "rebuild", 01:30:40.272 "target": "spare", 01:30:40.272 "progress": { 01:30:40.272 "blocks": 18432, 01:30:40.272 "percent": 14 01:30:40.272 } 01:30:40.272 }, 01:30:40.273 "base_bdevs_list": [ 01:30:40.273 { 01:30:40.273 "name": "spare", 01:30:40.273 "uuid": "598d7051-5b31-5c04-ae75-096dfd352f0a", 01:30:40.273 "is_configured": true, 01:30:40.273 "data_offset": 2048, 01:30:40.273 "data_size": 63488 01:30:40.273 }, 01:30:40.273 { 01:30:40.273 "name": "BaseBdev2", 01:30:40.273 "uuid": "47cf66f6-811f-5b10-9559-3c47e559c3f2", 01:30:40.273 "is_configured": true, 01:30:40.273 "data_offset": 2048, 01:30:40.273 "data_size": 63488 01:30:40.273 }, 01:30:40.273 { 01:30:40.273 "name": "BaseBdev3", 01:30:40.273 "uuid": "bc642bec-f476-5b4d-9307-bac4c84599a9", 01:30:40.273 "is_configured": true, 01:30:40.273 "data_offset": 2048, 01:30:40.273 "data_size": 63488 01:30:40.273 } 01:30:40.273 ] 01:30:40.273 }' 01:30:40.273 05:25:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:30:40.530 05:25:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 01:30:40.530 05:25:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:30:40.530 05:25:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 01:30:40.530 05:25:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 01:30:40.530 05:25:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:40.530 05:25:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:30:40.530 [2024-12-09 05:25:31.978910] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 01:30:40.530 [2024-12-09 05:25:32.023251] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 01:30:40.530 [2024-12-09 05:25:32.023425] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:30:40.530 [2024-12-09 05:25:32.023457] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 01:30:40.530 [2024-12-09 05:25:32.023471] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 01:30:40.530 05:25:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:40.530 05:25:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 01:30:40.530 05:25:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:30:40.530 05:25:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:30:40.530 05:25:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 01:30:40.530 05:25:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:30:40.530 05:25:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 01:30:40.531 05:25:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:30:40.531 05:25:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:30:40.531 05:25:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:30:40.531 05:25:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 01:30:40.531 05:25:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:30:40.531 05:25:32 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 01:30:40.531 05:25:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:30:40.531 05:25:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:30:40.531 05:25:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:40.531 05:25:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:30:40.531 "name": "raid_bdev1", 01:30:40.531 "uuid": "1b16a2e3-2bb4-479d-90f2-3c005c22347f", 01:30:40.531 "strip_size_kb": 64, 01:30:40.531 "state": "online", 01:30:40.531 "raid_level": "raid5f", 01:30:40.531 "superblock": true, 01:30:40.531 "num_base_bdevs": 3, 01:30:40.531 "num_base_bdevs_discovered": 2, 01:30:40.531 "num_base_bdevs_operational": 2, 01:30:40.531 "base_bdevs_list": [ 01:30:40.531 { 01:30:40.531 "name": null, 01:30:40.531 "uuid": "00000000-0000-0000-0000-000000000000", 01:30:40.531 "is_configured": false, 01:30:40.531 "data_offset": 0, 01:30:40.531 "data_size": 63488 01:30:40.531 }, 01:30:40.531 { 01:30:40.531 "name": "BaseBdev2", 01:30:40.531 "uuid": "47cf66f6-811f-5b10-9559-3c47e559c3f2", 01:30:40.531 "is_configured": true, 01:30:40.531 "data_offset": 2048, 01:30:40.531 "data_size": 63488 01:30:40.531 }, 01:30:40.531 { 01:30:40.531 "name": "BaseBdev3", 01:30:40.531 "uuid": "bc642bec-f476-5b4d-9307-bac4c84599a9", 01:30:40.531 "is_configured": true, 01:30:40.531 "data_offset": 2048, 01:30:40.531 "data_size": 63488 01:30:40.531 } 01:30:40.531 ] 01:30:40.531 }' 01:30:40.531 05:25:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:30:40.531 05:25:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:30:41.094 05:25:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 01:30:41.094 05:25:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 01:30:41.094 05:25:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 01:30:41.094 05:25:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 01:30:41.094 05:25:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:30:41.094 05:25:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:30:41.094 05:25:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:30:41.094 05:25:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:41.094 05:25:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:30:41.094 05:25:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:41.094 05:25:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:30:41.094 "name": "raid_bdev1", 01:30:41.094 "uuid": "1b16a2e3-2bb4-479d-90f2-3c005c22347f", 01:30:41.094 "strip_size_kb": 64, 01:30:41.094 "state": "online", 01:30:41.094 "raid_level": "raid5f", 01:30:41.094 "superblock": true, 01:30:41.094 "num_base_bdevs": 3, 01:30:41.094 "num_base_bdevs_discovered": 2, 01:30:41.094 "num_base_bdevs_operational": 2, 01:30:41.094 "base_bdevs_list": [ 01:30:41.094 { 01:30:41.094 "name": null, 01:30:41.094 "uuid": "00000000-0000-0000-0000-000000000000", 01:30:41.094 "is_configured": false, 01:30:41.094 "data_offset": 0, 01:30:41.094 "data_size": 63488 01:30:41.094 }, 01:30:41.094 { 01:30:41.094 "name": "BaseBdev2", 01:30:41.094 "uuid": "47cf66f6-811f-5b10-9559-3c47e559c3f2", 01:30:41.094 "is_configured": true, 01:30:41.094 "data_offset": 2048, 01:30:41.094 "data_size": 63488 01:30:41.094 }, 01:30:41.094 { 01:30:41.094 "name": "BaseBdev3", 01:30:41.094 "uuid": "bc642bec-f476-5b4d-9307-bac4c84599a9", 01:30:41.094 "is_configured": true, 
01:30:41.094 "data_offset": 2048, 01:30:41.094 "data_size": 63488 01:30:41.094 } 01:30:41.094 ] 01:30:41.094 }' 01:30:41.094 05:25:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:30:41.094 05:25:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 01:30:41.094 05:25:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:30:41.352 05:25:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 01:30:41.352 05:25:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 01:30:41.352 05:25:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:41.352 05:25:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:30:41.352 05:25:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:41.352 05:25:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 01:30:41.352 05:25:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:41.352 05:25:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:30:41.352 [2024-12-09 05:25:32.761247] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 01:30:41.352 [2024-12-09 05:25:32.761341] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:30:41.352 [2024-12-09 05:25:32.761403] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 01:30:41.352 [2024-12-09 05:25:32.761422] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:30:41.352 [2024-12-09 05:25:32.762094] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:30:41.352 [2024-12-09 
05:25:32.762154] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 01:30:41.352 [2024-12-09 05:25:32.762321] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 01:30:41.352 [2024-12-09 05:25:32.762341] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 01:30:41.352 [2024-12-09 05:25:32.762385] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 01:30:41.352 [2024-12-09 05:25:32.762398] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 01:30:41.352 BaseBdev1 01:30:41.352 05:25:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:41.352 05:25:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 01:30:42.284 05:25:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 01:30:42.284 05:25:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:30:42.284 05:25:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:30:42.284 05:25:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 01:30:42.284 05:25:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:30:42.285 05:25:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 01:30:42.285 05:25:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:30:42.285 05:25:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:30:42.285 05:25:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:30:42.285 05:25:33 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 01:30:42.285 05:25:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:30:42.285 05:25:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:42.285 05:25:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:30:42.285 05:25:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:30:42.285 05:25:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:42.285 05:25:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:30:42.285 "name": "raid_bdev1", 01:30:42.285 "uuid": "1b16a2e3-2bb4-479d-90f2-3c005c22347f", 01:30:42.285 "strip_size_kb": 64, 01:30:42.285 "state": "online", 01:30:42.285 "raid_level": "raid5f", 01:30:42.285 "superblock": true, 01:30:42.285 "num_base_bdevs": 3, 01:30:42.285 "num_base_bdevs_discovered": 2, 01:30:42.285 "num_base_bdevs_operational": 2, 01:30:42.285 "base_bdevs_list": [ 01:30:42.285 { 01:30:42.285 "name": null, 01:30:42.285 "uuid": "00000000-0000-0000-0000-000000000000", 01:30:42.285 "is_configured": false, 01:30:42.285 "data_offset": 0, 01:30:42.285 "data_size": 63488 01:30:42.285 }, 01:30:42.285 { 01:30:42.285 "name": "BaseBdev2", 01:30:42.285 "uuid": "47cf66f6-811f-5b10-9559-3c47e559c3f2", 01:30:42.285 "is_configured": true, 01:30:42.285 "data_offset": 2048, 01:30:42.285 "data_size": 63488 01:30:42.285 }, 01:30:42.285 { 01:30:42.285 "name": "BaseBdev3", 01:30:42.285 "uuid": "bc642bec-f476-5b4d-9307-bac4c84599a9", 01:30:42.285 "is_configured": true, 01:30:42.285 "data_offset": 2048, 01:30:42.285 "data_size": 63488 01:30:42.285 } 01:30:42.285 ] 01:30:42.285 }' 01:30:42.285 05:25:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:30:42.285 05:25:33 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 01:30:42.854 05:25:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 01:30:42.854 05:25:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:30:42.854 05:25:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 01:30:42.854 05:25:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 01:30:42.854 05:25:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:30:42.855 05:25:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:30:42.855 05:25:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:30:42.855 05:25:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:42.855 05:25:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:30:42.855 05:25:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:42.855 05:25:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:30:42.855 "name": "raid_bdev1", 01:30:42.855 "uuid": "1b16a2e3-2bb4-479d-90f2-3c005c22347f", 01:30:42.855 "strip_size_kb": 64, 01:30:42.855 "state": "online", 01:30:42.855 "raid_level": "raid5f", 01:30:42.855 "superblock": true, 01:30:42.855 "num_base_bdevs": 3, 01:30:42.855 "num_base_bdevs_discovered": 2, 01:30:42.855 "num_base_bdevs_operational": 2, 01:30:42.855 "base_bdevs_list": [ 01:30:42.855 { 01:30:42.855 "name": null, 01:30:42.855 "uuid": "00000000-0000-0000-0000-000000000000", 01:30:42.855 "is_configured": false, 01:30:42.855 "data_offset": 0, 01:30:42.855 "data_size": 63488 01:30:42.855 }, 01:30:42.855 { 01:30:42.855 "name": "BaseBdev2", 01:30:42.855 "uuid": "47cf66f6-811f-5b10-9559-3c47e559c3f2", 
01:30:42.855 "is_configured": true, 01:30:42.855 "data_offset": 2048, 01:30:42.855 "data_size": 63488 01:30:42.855 }, 01:30:42.855 { 01:30:42.855 "name": "BaseBdev3", 01:30:42.855 "uuid": "bc642bec-f476-5b4d-9307-bac4c84599a9", 01:30:42.855 "is_configured": true, 01:30:42.855 "data_offset": 2048, 01:30:42.855 "data_size": 63488 01:30:42.855 } 01:30:42.855 ] 01:30:42.855 }' 01:30:42.855 05:25:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:30:42.855 05:25:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 01:30:42.855 05:25:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:30:43.113 05:25:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 01:30:43.113 05:25:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 01:30:43.113 05:25:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 01:30:43.113 05:25:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 01:30:43.113 05:25:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 01:30:43.113 05:25:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:30:43.113 05:25:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 01:30:43.113 05:25:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:30:43.113 05:25:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 01:30:43.113 05:25:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:43.113 05:25:34 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:30:43.113 [2024-12-09 05:25:34.486104] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 01:30:43.113 [2024-12-09 05:25:34.486342] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 01:30:43.113 [2024-12-09 05:25:34.486401] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 01:30:43.113 request: 01:30:43.113 { 01:30:43.113 "base_bdev": "BaseBdev1", 01:30:43.113 "raid_bdev": "raid_bdev1", 01:30:43.113 "method": "bdev_raid_add_base_bdev", 01:30:43.113 "req_id": 1 01:30:43.113 } 01:30:43.113 Got JSON-RPC error response 01:30:43.113 response: 01:30:43.113 { 01:30:43.113 "code": -22, 01:30:43.113 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 01:30:43.113 } 01:30:43.113 05:25:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 01:30:43.113 05:25:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 01:30:43.113 05:25:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:30:43.113 05:25:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:30:43.113 05:25:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:30:43.113 05:25:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 01:30:44.050 05:25:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 01:30:44.050 05:25:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:30:44.050 05:25:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:30:44.050 05:25:35 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 01:30:44.050 05:25:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:30:44.050 05:25:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 01:30:44.050 05:25:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:30:44.050 05:25:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:30:44.050 05:25:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:30:44.050 05:25:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 01:30:44.050 05:25:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:30:44.050 05:25:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:44.050 05:25:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:30:44.050 05:25:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:30:44.050 05:25:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:44.050 05:25:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:30:44.050 "name": "raid_bdev1", 01:30:44.050 "uuid": "1b16a2e3-2bb4-479d-90f2-3c005c22347f", 01:30:44.050 "strip_size_kb": 64, 01:30:44.050 "state": "online", 01:30:44.050 "raid_level": "raid5f", 01:30:44.050 "superblock": true, 01:30:44.050 "num_base_bdevs": 3, 01:30:44.050 "num_base_bdevs_discovered": 2, 01:30:44.050 "num_base_bdevs_operational": 2, 01:30:44.050 "base_bdevs_list": [ 01:30:44.050 { 01:30:44.050 "name": null, 01:30:44.050 "uuid": "00000000-0000-0000-0000-000000000000", 01:30:44.050 "is_configured": false, 01:30:44.050 "data_offset": 0, 01:30:44.050 "data_size": 63488 01:30:44.050 }, 01:30:44.050 { 01:30:44.050 
"name": "BaseBdev2", 01:30:44.050 "uuid": "47cf66f6-811f-5b10-9559-3c47e559c3f2", 01:30:44.050 "is_configured": true, 01:30:44.050 "data_offset": 2048, 01:30:44.050 "data_size": 63488 01:30:44.051 }, 01:30:44.051 { 01:30:44.051 "name": "BaseBdev3", 01:30:44.051 "uuid": "bc642bec-f476-5b4d-9307-bac4c84599a9", 01:30:44.051 "is_configured": true, 01:30:44.051 "data_offset": 2048, 01:30:44.051 "data_size": 63488 01:30:44.051 } 01:30:44.051 ] 01:30:44.051 }' 01:30:44.051 05:25:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:30:44.051 05:25:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:30:44.616 05:25:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 01:30:44.616 05:25:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:30:44.616 05:25:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 01:30:44.616 05:25:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 01:30:44.616 05:25:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:30:44.616 05:25:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:30:44.616 05:25:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:44.616 05:25:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:30:44.616 05:25:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:30:44.616 05:25:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:44.616 05:25:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:30:44.616 "name": "raid_bdev1", 01:30:44.616 "uuid": "1b16a2e3-2bb4-479d-90f2-3c005c22347f", 01:30:44.616 
"strip_size_kb": 64, 01:30:44.616 "state": "online", 01:30:44.616 "raid_level": "raid5f", 01:30:44.616 "superblock": true, 01:30:44.616 "num_base_bdevs": 3, 01:30:44.616 "num_base_bdevs_discovered": 2, 01:30:44.616 "num_base_bdevs_operational": 2, 01:30:44.616 "base_bdevs_list": [ 01:30:44.616 { 01:30:44.616 "name": null, 01:30:44.616 "uuid": "00000000-0000-0000-0000-000000000000", 01:30:44.616 "is_configured": false, 01:30:44.616 "data_offset": 0, 01:30:44.616 "data_size": 63488 01:30:44.616 }, 01:30:44.616 { 01:30:44.616 "name": "BaseBdev2", 01:30:44.616 "uuid": "47cf66f6-811f-5b10-9559-3c47e559c3f2", 01:30:44.616 "is_configured": true, 01:30:44.616 "data_offset": 2048, 01:30:44.616 "data_size": 63488 01:30:44.616 }, 01:30:44.616 { 01:30:44.616 "name": "BaseBdev3", 01:30:44.616 "uuid": "bc642bec-f476-5b4d-9307-bac4c84599a9", 01:30:44.616 "is_configured": true, 01:30:44.616 "data_offset": 2048, 01:30:44.616 "data_size": 63488 01:30:44.616 } 01:30:44.616 ] 01:30:44.616 }' 01:30:44.616 05:25:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:30:44.616 05:25:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 01:30:44.617 05:25:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:30:44.617 05:25:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 01:30:44.617 05:25:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 82332 01:30:44.617 05:25:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 82332 ']' 01:30:44.617 05:25:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 82332 01:30:44.617 05:25:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 01:30:44.617 05:25:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:30:44.617 05:25:36 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82332 01:30:44.617 killing process with pid 82332 01:30:44.617 Received shutdown signal, test time was about 60.000000 seconds 01:30:44.617 01:30:44.617 Latency(us) 01:30:44.617 [2024-12-09T05:25:36.234Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:30:44.617 [2024-12-09T05:25:36.234Z] =================================================================================================================== 01:30:44.617 [2024-12-09T05:25:36.234Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 01:30:44.617 05:25:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:30:44.617 05:25:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:30:44.617 05:25:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82332' 01:30:44.617 05:25:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 82332 01:30:44.617 [2024-12-09 05:25:36.216069] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 01:30:44.617 05:25:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 82332 01:30:44.617 [2024-12-09 05:25:36.216214] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 01:30:44.617 [2024-12-09 05:25:36.216327] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 01:30:44.617 [2024-12-09 05:25:36.216346] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 01:30:45.184 [2024-12-09 05:25:36.548095] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 01:30:46.150 05:25:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 01:30:46.150 01:30:46.150 real 0m25.244s 01:30:46.150 user 0m33.773s 
01:30:46.150 sys 0m2.696s 01:30:46.150 ************************************ 01:30:46.150 END TEST raid5f_rebuild_test_sb 01:30:46.150 05:25:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 01:30:46.150 05:25:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:30:46.150 ************************************ 01:30:46.150 05:25:37 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 01:30:46.150 05:25:37 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 01:30:46.150 05:25:37 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 01:30:46.150 05:25:37 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 01:30:46.150 05:25:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 01:30:46.150 ************************************ 01:30:46.150 START TEST raid5f_state_function_test 01:30:46.150 ************************************ 01:30:46.150 05:25:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 false 01:30:46.150 05:25:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 01:30:46.150 05:25:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 01:30:46.150 05:25:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 01:30:46.150 05:25:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 01:30:46.150 05:25:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 01:30:46.150 05:25:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 01:30:46.150 05:25:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 01:30:46.150 05:25:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
01:30:46.150 05:25:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 01:30:46.150 05:25:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 01:30:46.150 05:25:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 01:30:46.150 05:25:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 01:30:46.150 05:25:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 01:30:46.150 05:25:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 01:30:46.150 05:25:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 01:30:46.150 05:25:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 01:30:46.150 05:25:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 01:30:46.150 05:25:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 01:30:46.150 05:25:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 01:30:46.150 05:25:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 01:30:46.150 05:25:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 01:30:46.150 05:25:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 01:30:46.150 05:25:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 01:30:46.150 05:25:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 01:30:46.150 05:25:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 01:30:46.150 05:25:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # 
strip_size=64 01:30:46.150 05:25:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 01:30:46.150 05:25:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 01:30:46.150 05:25:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 01:30:46.150 Process raid pid: 83098 01:30:46.150 05:25:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=83098 01:30:46.150 05:25:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83098' 01:30:46.150 05:25:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 83098 01:30:46.150 05:25:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 01:30:46.150 05:25:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 83098 ']' 01:30:46.150 05:25:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:30:46.150 05:25:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 01:30:46.150 05:25:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:30:46.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:30:46.150 05:25:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 01:30:46.150 05:25:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:30:46.409 [2024-12-09 05:25:37.820563] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
01:30:46.409 [2024-12-09 05:25:37.820767] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:30:46.409 [2024-12-09 05:25:38.001516] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:30:46.666 [2024-12-09 05:25:38.145397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:30:46.925 [2024-12-09 05:25:38.382406] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 01:30:46.925 [2024-12-09 05:25:38.382452] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 01:30:47.490 05:25:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:30:47.490 05:25:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 01:30:47.490 05:25:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 01:30:47.490 05:25:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:47.490 05:25:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:30:47.490 [2024-12-09 05:25:38.837326] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 01:30:47.490 [2024-12-09 05:25:38.837418] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 01:30:47.490 [2024-12-09 05:25:38.837437] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 01:30:47.490 [2024-12-09 05:25:38.837455] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 01:30:47.490 [2024-12-09 05:25:38.837466] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 01:30:47.490 [2024-12-09 05:25:38.837480] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 01:30:47.490 [2024-12-09 05:25:38.837490] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 01:30:47.490 [2024-12-09 05:25:38.837504] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 01:30:47.490 05:25:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:47.490 05:25:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 01:30:47.490 05:25:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:30:47.490 05:25:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:30:47.490 05:25:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 01:30:47.490 05:25:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:30:47.491 05:25:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 01:30:47.491 05:25:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:30:47.491 05:25:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:30:47.491 05:25:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:30:47.491 05:25:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:30:47.491 05:25:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:30:47.491 05:25:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:30:47.491 05:25:38 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:47.491 05:25:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:30:47.491 05:25:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:47.491 05:25:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:30:47.491 "name": "Existed_Raid", 01:30:47.491 "uuid": "00000000-0000-0000-0000-000000000000", 01:30:47.491 "strip_size_kb": 64, 01:30:47.491 "state": "configuring", 01:30:47.491 "raid_level": "raid5f", 01:30:47.491 "superblock": false, 01:30:47.491 "num_base_bdevs": 4, 01:30:47.491 "num_base_bdevs_discovered": 0, 01:30:47.491 "num_base_bdevs_operational": 4, 01:30:47.491 "base_bdevs_list": [ 01:30:47.491 { 01:30:47.491 "name": "BaseBdev1", 01:30:47.491 "uuid": "00000000-0000-0000-0000-000000000000", 01:30:47.491 "is_configured": false, 01:30:47.491 "data_offset": 0, 01:30:47.491 "data_size": 0 01:30:47.491 }, 01:30:47.491 { 01:30:47.491 "name": "BaseBdev2", 01:30:47.491 "uuid": "00000000-0000-0000-0000-000000000000", 01:30:47.491 "is_configured": false, 01:30:47.491 "data_offset": 0, 01:30:47.491 "data_size": 0 01:30:47.491 }, 01:30:47.491 { 01:30:47.491 "name": "BaseBdev3", 01:30:47.491 "uuid": "00000000-0000-0000-0000-000000000000", 01:30:47.491 "is_configured": false, 01:30:47.491 "data_offset": 0, 01:30:47.491 "data_size": 0 01:30:47.491 }, 01:30:47.491 { 01:30:47.491 "name": "BaseBdev4", 01:30:47.491 "uuid": "00000000-0000-0000-0000-000000000000", 01:30:47.491 "is_configured": false, 01:30:47.491 "data_offset": 0, 01:30:47.491 "data_size": 0 01:30:47.491 } 01:30:47.491 ] 01:30:47.491 }' 01:30:47.491 05:25:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:30:47.491 05:25:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:30:48.057 05:25:39 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 01:30:48.057 05:25:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:48.057 05:25:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:30:48.058 [2024-12-09 05:25:39.389504] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 01:30:48.058 [2024-12-09 05:25:39.389555] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 01:30:48.058 05:25:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:48.058 05:25:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 01:30:48.058 05:25:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:48.058 05:25:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:30:48.058 [2024-12-09 05:25:39.397504] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 01:30:48.058 [2024-12-09 05:25:39.397561] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 01:30:48.058 [2024-12-09 05:25:39.397577] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 01:30:48.058 [2024-12-09 05:25:39.397594] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 01:30:48.058 [2024-12-09 05:25:39.397604] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 01:30:48.058 [2024-12-09 05:25:39.397618] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 01:30:48.058 [2024-12-09 05:25:39.397628] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
01:30:48.058 [2024-12-09 05:25:39.397641] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 01:30:48.058 05:25:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:48.058 05:25:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 01:30:48.058 05:25:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:48.058 05:25:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:30:48.058 [2024-12-09 05:25:39.447357] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 01:30:48.058 BaseBdev1 01:30:48.058 05:25:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:48.058 05:25:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 01:30:48.058 05:25:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 01:30:48.058 05:25:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 01:30:48.058 05:25:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 01:30:48.058 05:25:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 01:30:48.058 05:25:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 01:30:48.058 05:25:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 01:30:48.058 05:25:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:48.058 05:25:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:30:48.058 05:25:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:48.058 
05:25:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 01:30:48.058 05:25:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:48.058 05:25:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:30:48.058 [ 01:30:48.058 { 01:30:48.058 "name": "BaseBdev1", 01:30:48.058 "aliases": [ 01:30:48.058 "722bd686-f9b0-4531-9bee-4d16e0c011dc" 01:30:48.058 ], 01:30:48.058 "product_name": "Malloc disk", 01:30:48.058 "block_size": 512, 01:30:48.058 "num_blocks": 65536, 01:30:48.058 "uuid": "722bd686-f9b0-4531-9bee-4d16e0c011dc", 01:30:48.058 "assigned_rate_limits": { 01:30:48.058 "rw_ios_per_sec": 0, 01:30:48.058 "rw_mbytes_per_sec": 0, 01:30:48.058 "r_mbytes_per_sec": 0, 01:30:48.058 "w_mbytes_per_sec": 0 01:30:48.058 }, 01:30:48.058 "claimed": true, 01:30:48.058 "claim_type": "exclusive_write", 01:30:48.058 "zoned": false, 01:30:48.058 "supported_io_types": { 01:30:48.058 "read": true, 01:30:48.058 "write": true, 01:30:48.058 "unmap": true, 01:30:48.058 "flush": true, 01:30:48.058 "reset": true, 01:30:48.058 "nvme_admin": false, 01:30:48.058 "nvme_io": false, 01:30:48.058 "nvme_io_md": false, 01:30:48.058 "write_zeroes": true, 01:30:48.058 "zcopy": true, 01:30:48.058 "get_zone_info": false, 01:30:48.058 "zone_management": false, 01:30:48.058 "zone_append": false, 01:30:48.058 "compare": false, 01:30:48.058 "compare_and_write": false, 01:30:48.058 "abort": true, 01:30:48.058 "seek_hole": false, 01:30:48.058 "seek_data": false, 01:30:48.058 "copy": true, 01:30:48.058 "nvme_iov_md": false 01:30:48.058 }, 01:30:48.058 "memory_domains": [ 01:30:48.058 { 01:30:48.058 "dma_device_id": "system", 01:30:48.058 "dma_device_type": 1 01:30:48.058 }, 01:30:48.058 { 01:30:48.058 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:30:48.058 "dma_device_type": 2 01:30:48.058 } 01:30:48.058 ], 01:30:48.058 "driver_specific": {} 01:30:48.058 } 
01:30:48.058 ] 01:30:48.058 05:25:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:48.058 05:25:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 01:30:48.058 05:25:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 01:30:48.058 05:25:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:30:48.058 05:25:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:30:48.058 05:25:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 01:30:48.058 05:25:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:30:48.058 05:25:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 01:30:48.058 05:25:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:30:48.058 05:25:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:30:48.058 05:25:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:30:48.058 05:25:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:30:48.058 05:25:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:30:48.058 05:25:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:48.058 05:25:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:30:48.058 05:25:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:30:48.058 05:25:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
01:30:48.058 05:25:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:30:48.058 "name": "Existed_Raid", 01:30:48.058 "uuid": "00000000-0000-0000-0000-000000000000", 01:30:48.058 "strip_size_kb": 64, 01:30:48.058 "state": "configuring", 01:30:48.058 "raid_level": "raid5f", 01:30:48.058 "superblock": false, 01:30:48.058 "num_base_bdevs": 4, 01:30:48.058 "num_base_bdevs_discovered": 1, 01:30:48.058 "num_base_bdevs_operational": 4, 01:30:48.058 "base_bdevs_list": [ 01:30:48.058 { 01:30:48.058 "name": "BaseBdev1", 01:30:48.058 "uuid": "722bd686-f9b0-4531-9bee-4d16e0c011dc", 01:30:48.058 "is_configured": true, 01:30:48.058 "data_offset": 0, 01:30:48.058 "data_size": 65536 01:30:48.058 }, 01:30:48.058 { 01:30:48.058 "name": "BaseBdev2", 01:30:48.058 "uuid": "00000000-0000-0000-0000-000000000000", 01:30:48.058 "is_configured": false, 01:30:48.058 "data_offset": 0, 01:30:48.058 "data_size": 0 01:30:48.058 }, 01:30:48.058 { 01:30:48.058 "name": "BaseBdev3", 01:30:48.058 "uuid": "00000000-0000-0000-0000-000000000000", 01:30:48.058 "is_configured": false, 01:30:48.058 "data_offset": 0, 01:30:48.058 "data_size": 0 01:30:48.058 }, 01:30:48.058 { 01:30:48.058 "name": "BaseBdev4", 01:30:48.058 "uuid": "00000000-0000-0000-0000-000000000000", 01:30:48.058 "is_configured": false, 01:30:48.058 "data_offset": 0, 01:30:48.058 "data_size": 0 01:30:48.058 } 01:30:48.058 ] 01:30:48.058 }' 01:30:48.058 05:25:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:30:48.058 05:25:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:30:48.625 05:25:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 01:30:48.625 05:25:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:48.625 05:25:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:30:48.625 
[2024-12-09 05:25:40.047723] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 01:30:48.625 [2024-12-09 05:25:40.047818] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 01:30:48.625 05:25:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:48.625 05:25:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 01:30:48.625 05:25:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:48.625 05:25:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:30:48.625 [2024-12-09 05:25:40.059795] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 01:30:48.625 [2024-12-09 05:25:40.062514] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 01:30:48.625 [2024-12-09 05:25:40.062701] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 01:30:48.625 [2024-12-09 05:25:40.062844] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 01:30:48.625 [2024-12-09 05:25:40.062907] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 01:30:48.625 [2024-12-09 05:25:40.063121] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 01:30:48.625 [2024-12-09 05:25:40.063181] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 01:30:48.625 05:25:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:48.625 05:25:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 01:30:48.625 05:25:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( 
i < num_base_bdevs )) 01:30:48.625 05:25:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 01:30:48.625 05:25:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:30:48.625 05:25:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:30:48.625 05:25:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 01:30:48.625 05:25:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:30:48.625 05:25:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 01:30:48.625 05:25:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:30:48.625 05:25:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:30:48.625 05:25:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:30:48.625 05:25:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:30:48.625 05:25:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:30:48.625 05:25:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:30:48.625 05:25:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:48.625 05:25:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:30:48.625 05:25:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:48.625 05:25:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:30:48.625 "name": "Existed_Raid", 01:30:48.625 "uuid": "00000000-0000-0000-0000-000000000000", 
01:30:48.625 "strip_size_kb": 64, 01:30:48.625 "state": "configuring", 01:30:48.625 "raid_level": "raid5f", 01:30:48.625 "superblock": false, 01:30:48.625 "num_base_bdevs": 4, 01:30:48.625 "num_base_bdevs_discovered": 1, 01:30:48.625 "num_base_bdevs_operational": 4, 01:30:48.625 "base_bdevs_list": [ 01:30:48.625 { 01:30:48.625 "name": "BaseBdev1", 01:30:48.625 "uuid": "722bd686-f9b0-4531-9bee-4d16e0c011dc", 01:30:48.625 "is_configured": true, 01:30:48.625 "data_offset": 0, 01:30:48.625 "data_size": 65536 01:30:48.625 }, 01:30:48.625 { 01:30:48.625 "name": "BaseBdev2", 01:30:48.625 "uuid": "00000000-0000-0000-0000-000000000000", 01:30:48.625 "is_configured": false, 01:30:48.625 "data_offset": 0, 01:30:48.625 "data_size": 0 01:30:48.625 }, 01:30:48.625 { 01:30:48.625 "name": "BaseBdev3", 01:30:48.625 "uuid": "00000000-0000-0000-0000-000000000000", 01:30:48.626 "is_configured": false, 01:30:48.626 "data_offset": 0, 01:30:48.626 "data_size": 0 01:30:48.626 }, 01:30:48.626 { 01:30:48.626 "name": "BaseBdev4", 01:30:48.626 "uuid": "00000000-0000-0000-0000-000000000000", 01:30:48.626 "is_configured": false, 01:30:48.626 "data_offset": 0, 01:30:48.626 "data_size": 0 01:30:48.626 } 01:30:48.626 ] 01:30:48.626 }' 01:30:48.626 05:25:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:30:48.626 05:25:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:30:49.191 05:25:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 01:30:49.191 05:25:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:49.191 05:25:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:30:49.191 [2024-12-09 05:25:40.600583] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 01:30:49.191 BaseBdev2 01:30:49.191 05:25:40 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:49.191 05:25:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 01:30:49.191 05:25:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 01:30:49.191 05:25:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 01:30:49.191 05:25:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 01:30:49.191 05:25:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 01:30:49.191 05:25:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 01:30:49.191 05:25:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 01:30:49.191 05:25:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:49.191 05:25:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:30:49.191 05:25:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:49.191 05:25:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 01:30:49.191 05:25:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:49.191 05:25:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:30:49.191 [ 01:30:49.191 { 01:30:49.191 "name": "BaseBdev2", 01:30:49.191 "aliases": [ 01:30:49.191 "994e41e8-1962-4dd6-976a-3b31066b1cc3" 01:30:49.191 ], 01:30:49.191 "product_name": "Malloc disk", 01:30:49.191 "block_size": 512, 01:30:49.191 "num_blocks": 65536, 01:30:49.191 "uuid": "994e41e8-1962-4dd6-976a-3b31066b1cc3", 01:30:49.191 "assigned_rate_limits": { 01:30:49.191 "rw_ios_per_sec": 0, 01:30:49.191 "rw_mbytes_per_sec": 0, 01:30:49.191 
"r_mbytes_per_sec": 0, 01:30:49.191 "w_mbytes_per_sec": 0 01:30:49.191 }, 01:30:49.191 "claimed": true, 01:30:49.191 "claim_type": "exclusive_write", 01:30:49.191 "zoned": false, 01:30:49.191 "supported_io_types": { 01:30:49.191 "read": true, 01:30:49.191 "write": true, 01:30:49.191 "unmap": true, 01:30:49.191 "flush": true, 01:30:49.191 "reset": true, 01:30:49.191 "nvme_admin": false, 01:30:49.191 "nvme_io": false, 01:30:49.191 "nvme_io_md": false, 01:30:49.191 "write_zeroes": true, 01:30:49.191 "zcopy": true, 01:30:49.191 "get_zone_info": false, 01:30:49.191 "zone_management": false, 01:30:49.191 "zone_append": false, 01:30:49.191 "compare": false, 01:30:49.191 "compare_and_write": false, 01:30:49.191 "abort": true, 01:30:49.191 "seek_hole": false, 01:30:49.191 "seek_data": false, 01:30:49.191 "copy": true, 01:30:49.191 "nvme_iov_md": false 01:30:49.191 }, 01:30:49.191 "memory_domains": [ 01:30:49.191 { 01:30:49.191 "dma_device_id": "system", 01:30:49.191 "dma_device_type": 1 01:30:49.191 }, 01:30:49.191 { 01:30:49.191 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:30:49.191 "dma_device_type": 2 01:30:49.191 } 01:30:49.191 ], 01:30:49.191 "driver_specific": {} 01:30:49.191 } 01:30:49.191 ] 01:30:49.191 05:25:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:49.191 05:25:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 01:30:49.191 05:25:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 01:30:49.191 05:25:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 01:30:49.191 05:25:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 01:30:49.191 05:25:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:30:49.191 05:25:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # 
local expected_state=configuring 01:30:49.191 05:25:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 01:30:49.191 05:25:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:30:49.191 05:25:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 01:30:49.191 05:25:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:30:49.191 05:25:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:30:49.191 05:25:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:30:49.191 05:25:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:30:49.191 05:25:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:30:49.191 05:25:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:49.191 05:25:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:30:49.191 05:25:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:30:49.192 05:25:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:49.192 05:25:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:30:49.192 "name": "Existed_Raid", 01:30:49.192 "uuid": "00000000-0000-0000-0000-000000000000", 01:30:49.192 "strip_size_kb": 64, 01:30:49.192 "state": "configuring", 01:30:49.192 "raid_level": "raid5f", 01:30:49.192 "superblock": false, 01:30:49.192 "num_base_bdevs": 4, 01:30:49.192 "num_base_bdevs_discovered": 2, 01:30:49.192 "num_base_bdevs_operational": 4, 01:30:49.192 "base_bdevs_list": [ 01:30:49.192 { 01:30:49.192 "name": "BaseBdev1", 01:30:49.192 "uuid": 
"722bd686-f9b0-4531-9bee-4d16e0c011dc", 01:30:49.192 "is_configured": true, 01:30:49.192 "data_offset": 0, 01:30:49.192 "data_size": 65536 01:30:49.192 }, 01:30:49.192 { 01:30:49.192 "name": "BaseBdev2", 01:30:49.192 "uuid": "994e41e8-1962-4dd6-976a-3b31066b1cc3", 01:30:49.192 "is_configured": true, 01:30:49.192 "data_offset": 0, 01:30:49.192 "data_size": 65536 01:30:49.192 }, 01:30:49.192 { 01:30:49.192 "name": "BaseBdev3", 01:30:49.192 "uuid": "00000000-0000-0000-0000-000000000000", 01:30:49.192 "is_configured": false, 01:30:49.192 "data_offset": 0, 01:30:49.192 "data_size": 0 01:30:49.192 }, 01:30:49.192 { 01:30:49.192 "name": "BaseBdev4", 01:30:49.192 "uuid": "00000000-0000-0000-0000-000000000000", 01:30:49.192 "is_configured": false, 01:30:49.192 "data_offset": 0, 01:30:49.192 "data_size": 0 01:30:49.192 } 01:30:49.192 ] 01:30:49.192 }' 01:30:49.192 05:25:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:30:49.192 05:25:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:30:49.765 05:25:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 01:30:49.766 05:25:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:49.766 05:25:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:30:49.766 [2024-12-09 05:25:41.176239] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 01:30:49.766 BaseBdev3 01:30:49.766 05:25:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:49.766 05:25:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 01:30:49.766 05:25:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 01:30:49.766 05:25:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- 
# local bdev_timeout= 01:30:49.766 05:25:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 01:30:49.766 05:25:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 01:30:49.766 05:25:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 01:30:49.766 05:25:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 01:30:49.766 05:25:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:49.766 05:25:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:30:49.766 05:25:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:49.766 05:25:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 01:30:49.766 05:25:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:49.766 05:25:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:30:49.766 [ 01:30:49.766 { 01:30:49.766 "name": "BaseBdev3", 01:30:49.766 "aliases": [ 01:30:49.766 "470a3b9e-7437-49de-affb-d7c85e5c4d50" 01:30:49.766 ], 01:30:49.766 "product_name": "Malloc disk", 01:30:49.766 "block_size": 512, 01:30:49.766 "num_blocks": 65536, 01:30:49.766 "uuid": "470a3b9e-7437-49de-affb-d7c85e5c4d50", 01:30:49.766 "assigned_rate_limits": { 01:30:49.766 "rw_ios_per_sec": 0, 01:30:49.766 "rw_mbytes_per_sec": 0, 01:30:49.766 "r_mbytes_per_sec": 0, 01:30:49.766 "w_mbytes_per_sec": 0 01:30:49.766 }, 01:30:49.766 "claimed": true, 01:30:49.766 "claim_type": "exclusive_write", 01:30:49.766 "zoned": false, 01:30:49.766 "supported_io_types": { 01:30:49.766 "read": true, 01:30:49.766 "write": true, 01:30:49.766 "unmap": true, 01:30:49.766 "flush": true, 01:30:49.766 "reset": true, 01:30:49.766 "nvme_admin": false, 
01:30:49.766 "nvme_io": false, 01:30:49.766 "nvme_io_md": false, 01:30:49.766 "write_zeroes": true, 01:30:49.766 "zcopy": true, 01:30:49.766 "get_zone_info": false, 01:30:49.766 "zone_management": false, 01:30:49.766 "zone_append": false, 01:30:49.766 "compare": false, 01:30:49.766 "compare_and_write": false, 01:30:49.766 "abort": true, 01:30:49.766 "seek_hole": false, 01:30:49.766 "seek_data": false, 01:30:49.766 "copy": true, 01:30:49.766 "nvme_iov_md": false 01:30:49.766 }, 01:30:49.766 "memory_domains": [ 01:30:49.766 { 01:30:49.766 "dma_device_id": "system", 01:30:49.766 "dma_device_type": 1 01:30:49.766 }, 01:30:49.766 { 01:30:49.766 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:30:49.766 "dma_device_type": 2 01:30:49.766 } 01:30:49.766 ], 01:30:49.766 "driver_specific": {} 01:30:49.766 } 01:30:49.766 ] 01:30:49.766 05:25:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:49.766 05:25:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 01:30:49.766 05:25:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 01:30:49.766 05:25:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 01:30:49.766 05:25:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 01:30:49.766 05:25:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:30:49.766 05:25:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:30:49.766 05:25:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 01:30:49.766 05:25:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:30:49.766 05:25:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 
01:30:49.766 05:25:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:30:49.766 05:25:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:30:49.766 05:25:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:30:49.766 05:25:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:30:49.766 05:25:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:30:49.766 05:25:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:49.766 05:25:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:30:49.766 05:25:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:30:49.766 05:25:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:49.766 05:25:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:30:49.766 "name": "Existed_Raid", 01:30:49.766 "uuid": "00000000-0000-0000-0000-000000000000", 01:30:49.766 "strip_size_kb": 64, 01:30:49.766 "state": "configuring", 01:30:49.766 "raid_level": "raid5f", 01:30:49.766 "superblock": false, 01:30:49.766 "num_base_bdevs": 4, 01:30:49.766 "num_base_bdevs_discovered": 3, 01:30:49.766 "num_base_bdevs_operational": 4, 01:30:49.766 "base_bdevs_list": [ 01:30:49.766 { 01:30:49.766 "name": "BaseBdev1", 01:30:49.766 "uuid": "722bd686-f9b0-4531-9bee-4d16e0c011dc", 01:30:49.766 "is_configured": true, 01:30:49.766 "data_offset": 0, 01:30:49.766 "data_size": 65536 01:30:49.766 }, 01:30:49.766 { 01:30:49.766 "name": "BaseBdev2", 01:30:49.766 "uuid": "994e41e8-1962-4dd6-976a-3b31066b1cc3", 01:30:49.766 "is_configured": true, 01:30:49.766 "data_offset": 0, 01:30:49.766 "data_size": 65536 01:30:49.766 }, 01:30:49.766 { 
01:30:49.766 "name": "BaseBdev3", 01:30:49.766 "uuid": "470a3b9e-7437-49de-affb-d7c85e5c4d50", 01:30:49.766 "is_configured": true, 01:30:49.766 "data_offset": 0, 01:30:49.766 "data_size": 65536 01:30:49.766 }, 01:30:49.766 { 01:30:49.766 "name": "BaseBdev4", 01:30:49.766 "uuid": "00000000-0000-0000-0000-000000000000", 01:30:49.766 "is_configured": false, 01:30:49.766 "data_offset": 0, 01:30:49.766 "data_size": 0 01:30:49.766 } 01:30:49.766 ] 01:30:49.766 }' 01:30:49.766 05:25:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:30:49.766 05:25:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:30:50.334 05:25:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 01:30:50.334 05:25:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:50.334 05:25:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:30:50.334 [2024-12-09 05:25:41.725105] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 01:30:50.334 [2024-12-09 05:25:41.725425] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 01:30:50.334 [2024-12-09 05:25:41.725561] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 01:30:50.334 [2024-12-09 05:25:41.726028] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 01:30:50.334 [2024-12-09 05:25:41.732889] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 01:30:50.334 [2024-12-09 05:25:41.733073] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 01:30:50.334 [2024-12-09 05:25:41.733571] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:30:50.334 BaseBdev4 01:30:50.334 05:25:41 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:50.334 05:25:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 01:30:50.334 05:25:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 01:30:50.334 05:25:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 01:30:50.334 05:25:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 01:30:50.334 05:25:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 01:30:50.334 05:25:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 01:30:50.334 05:25:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 01:30:50.334 05:25:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:50.334 05:25:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:30:50.334 05:25:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:50.334 05:25:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 01:30:50.334 05:25:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:50.334 05:25:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:30:50.334 [ 01:30:50.334 { 01:30:50.334 "name": "BaseBdev4", 01:30:50.334 "aliases": [ 01:30:50.334 "f2826595-4a37-47b6-b87e-b4900cafe3dc" 01:30:50.334 ], 01:30:50.334 "product_name": "Malloc disk", 01:30:50.334 "block_size": 512, 01:30:50.334 "num_blocks": 65536, 01:30:50.334 "uuid": "f2826595-4a37-47b6-b87e-b4900cafe3dc", 01:30:50.334 "assigned_rate_limits": { 01:30:50.334 "rw_ios_per_sec": 0, 01:30:50.334 
"rw_mbytes_per_sec": 0, 01:30:50.334 "r_mbytes_per_sec": 0, 01:30:50.334 "w_mbytes_per_sec": 0 01:30:50.334 }, 01:30:50.334 "claimed": true, 01:30:50.334 "claim_type": "exclusive_write", 01:30:50.334 "zoned": false, 01:30:50.334 "supported_io_types": { 01:30:50.334 "read": true, 01:30:50.334 "write": true, 01:30:50.334 "unmap": true, 01:30:50.334 "flush": true, 01:30:50.334 "reset": true, 01:30:50.334 "nvme_admin": false, 01:30:50.334 "nvme_io": false, 01:30:50.334 "nvme_io_md": false, 01:30:50.334 "write_zeroes": true, 01:30:50.334 "zcopy": true, 01:30:50.334 "get_zone_info": false, 01:30:50.334 "zone_management": false, 01:30:50.334 "zone_append": false, 01:30:50.334 "compare": false, 01:30:50.334 "compare_and_write": false, 01:30:50.334 "abort": true, 01:30:50.334 "seek_hole": false, 01:30:50.334 "seek_data": false, 01:30:50.334 "copy": true, 01:30:50.334 "nvme_iov_md": false 01:30:50.334 }, 01:30:50.334 "memory_domains": [ 01:30:50.334 { 01:30:50.334 "dma_device_id": "system", 01:30:50.334 "dma_device_type": 1 01:30:50.334 }, 01:30:50.334 { 01:30:50.334 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:30:50.334 "dma_device_type": 2 01:30:50.334 } 01:30:50.334 ], 01:30:50.334 "driver_specific": {} 01:30:50.334 } 01:30:50.334 ] 01:30:50.334 05:25:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:50.334 05:25:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 01:30:50.334 05:25:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 01:30:50.334 05:25:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 01:30:50.334 05:25:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 01:30:50.334 05:25:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:30:50.334 05:25:41 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:30:50.334 05:25:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 01:30:50.334 05:25:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:30:50.334 05:25:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 01:30:50.335 05:25:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:30:50.335 05:25:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:30:50.335 05:25:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:30:50.335 05:25:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:30:50.335 05:25:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:30:50.335 05:25:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:50.335 05:25:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:30:50.335 05:25:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:30:50.335 05:25:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:50.335 05:25:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:30:50.335 "name": "Existed_Raid", 01:30:50.335 "uuid": "30059b0a-444c-4d6e-8808-9bb4d84bcfd9", 01:30:50.335 "strip_size_kb": 64, 01:30:50.335 "state": "online", 01:30:50.335 "raid_level": "raid5f", 01:30:50.335 "superblock": false, 01:30:50.335 "num_base_bdevs": 4, 01:30:50.335 "num_base_bdevs_discovered": 4, 01:30:50.335 "num_base_bdevs_operational": 4, 01:30:50.335 "base_bdevs_list": [ 01:30:50.335 { 01:30:50.335 "name": 
"BaseBdev1", 01:30:50.335 "uuid": "722bd686-f9b0-4531-9bee-4d16e0c011dc", 01:30:50.335 "is_configured": true, 01:30:50.335 "data_offset": 0, 01:30:50.335 "data_size": 65536 01:30:50.335 }, 01:30:50.335 { 01:30:50.335 "name": "BaseBdev2", 01:30:50.335 "uuid": "994e41e8-1962-4dd6-976a-3b31066b1cc3", 01:30:50.335 "is_configured": true, 01:30:50.335 "data_offset": 0, 01:30:50.335 "data_size": 65536 01:30:50.335 }, 01:30:50.335 { 01:30:50.335 "name": "BaseBdev3", 01:30:50.335 "uuid": "470a3b9e-7437-49de-affb-d7c85e5c4d50", 01:30:50.335 "is_configured": true, 01:30:50.335 "data_offset": 0, 01:30:50.335 "data_size": 65536 01:30:50.335 }, 01:30:50.335 { 01:30:50.335 "name": "BaseBdev4", 01:30:50.335 "uuid": "f2826595-4a37-47b6-b87e-b4900cafe3dc", 01:30:50.335 "is_configured": true, 01:30:50.335 "data_offset": 0, 01:30:50.335 "data_size": 65536 01:30:50.335 } 01:30:50.335 ] 01:30:50.335 }' 01:30:50.335 05:25:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:30:50.335 05:25:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:30:50.902 05:25:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 01:30:50.902 05:25:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 01:30:50.902 05:25:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 01:30:50.902 05:25:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 01:30:50.902 05:25:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 01:30:50.902 05:25:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 01:30:50.902 05:25:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 01:30:50.902 05:25:42 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 01:30:50.902 05:25:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:30:50.902 05:25:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 01:30:50.902 [2024-12-09 05:25:42.277702] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 01:30:50.902 05:25:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:50.902 05:25:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 01:30:50.902 "name": "Existed_Raid", 01:30:50.902 "aliases": [ 01:30:50.902 "30059b0a-444c-4d6e-8808-9bb4d84bcfd9" 01:30:50.902 ], 01:30:50.902 "product_name": "Raid Volume", 01:30:50.902 "block_size": 512, 01:30:50.902 "num_blocks": 196608, 01:30:50.902 "uuid": "30059b0a-444c-4d6e-8808-9bb4d84bcfd9", 01:30:50.902 "assigned_rate_limits": { 01:30:50.902 "rw_ios_per_sec": 0, 01:30:50.902 "rw_mbytes_per_sec": 0, 01:30:50.902 "r_mbytes_per_sec": 0, 01:30:50.902 "w_mbytes_per_sec": 0 01:30:50.902 }, 01:30:50.902 "claimed": false, 01:30:50.902 "zoned": false, 01:30:50.902 "supported_io_types": { 01:30:50.902 "read": true, 01:30:50.902 "write": true, 01:30:50.902 "unmap": false, 01:30:50.902 "flush": false, 01:30:50.902 "reset": true, 01:30:50.902 "nvme_admin": false, 01:30:50.902 "nvme_io": false, 01:30:50.902 "nvme_io_md": false, 01:30:50.902 "write_zeroes": true, 01:30:50.902 "zcopy": false, 01:30:50.902 "get_zone_info": false, 01:30:50.902 "zone_management": false, 01:30:50.902 "zone_append": false, 01:30:50.902 "compare": false, 01:30:50.902 "compare_and_write": false, 01:30:50.902 "abort": false, 01:30:50.902 "seek_hole": false, 01:30:50.902 "seek_data": false, 01:30:50.902 "copy": false, 01:30:50.902 "nvme_iov_md": false 01:30:50.902 }, 01:30:50.902 "driver_specific": { 01:30:50.902 "raid": { 01:30:50.902 "uuid": "30059b0a-444c-4d6e-8808-9bb4d84bcfd9", 01:30:50.902 "strip_size_kb": 64, 
01:30:50.902 "state": "online", 01:30:50.902 "raid_level": "raid5f", 01:30:50.902 "superblock": false, 01:30:50.902 "num_base_bdevs": 4, 01:30:50.902 "num_base_bdevs_discovered": 4, 01:30:50.902 "num_base_bdevs_operational": 4, 01:30:50.902 "base_bdevs_list": [ 01:30:50.902 { 01:30:50.902 "name": "BaseBdev1", 01:30:50.902 "uuid": "722bd686-f9b0-4531-9bee-4d16e0c011dc", 01:30:50.902 "is_configured": true, 01:30:50.902 "data_offset": 0, 01:30:50.902 "data_size": 65536 01:30:50.902 }, 01:30:50.902 { 01:30:50.902 "name": "BaseBdev2", 01:30:50.902 "uuid": "994e41e8-1962-4dd6-976a-3b31066b1cc3", 01:30:50.902 "is_configured": true, 01:30:50.902 "data_offset": 0, 01:30:50.902 "data_size": 65536 01:30:50.902 }, 01:30:50.902 { 01:30:50.902 "name": "BaseBdev3", 01:30:50.902 "uuid": "470a3b9e-7437-49de-affb-d7c85e5c4d50", 01:30:50.902 "is_configured": true, 01:30:50.902 "data_offset": 0, 01:30:50.902 "data_size": 65536 01:30:50.902 }, 01:30:50.902 { 01:30:50.902 "name": "BaseBdev4", 01:30:50.902 "uuid": "f2826595-4a37-47b6-b87e-b4900cafe3dc", 01:30:50.902 "is_configured": true, 01:30:50.902 "data_offset": 0, 01:30:50.902 "data_size": 65536 01:30:50.902 } 01:30:50.902 ] 01:30:50.902 } 01:30:50.902 } 01:30:50.902 }' 01:30:50.902 05:25:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 01:30:50.902 05:25:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 01:30:50.902 BaseBdev2 01:30:50.902 BaseBdev3 01:30:50.902 BaseBdev4' 01:30:50.902 05:25:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:30:50.902 05:25:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 01:30:50.902 05:25:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:30:50.902 05:25:42 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:30:50.903 05:25:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 01:30:50.903 05:25:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:50.903 05:25:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:30:50.903 05:25:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:50.903 05:25:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:30:50.903 05:25:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:30:50.903 05:25:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:30:50.903 05:25:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:30:50.903 05:25:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 01:30:50.903 05:25:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:50.903 05:25:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:30:50.903 05:25:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:51.162 05:25:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:30:51.162 05:25:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:30:51.162 05:25:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:30:51.162 05:25:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev3 01:30:51.162 05:25:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:51.162 05:25:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:30:51.162 05:25:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:30:51.162 05:25:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:51.162 05:25:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:30:51.162 05:25:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:30:51.162 05:25:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:30:51.162 05:25:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 01:30:51.162 05:25:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:51.162 05:25:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:30:51.162 05:25:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:30:51.162 05:25:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:51.162 05:25:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:30:51.162 05:25:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:30:51.162 05:25:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 01:30:51.162 05:25:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:51.162 05:25:42 bdev_raid.raid5f_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 01:30:51.162 [2024-12-09 05:25:42.637678] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 01:30:51.162 05:25:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:51.162 05:25:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 01:30:51.162 05:25:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 01:30:51.162 05:25:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 01:30:51.162 05:25:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 01:30:51.162 05:25:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 01:30:51.162 05:25:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 01:30:51.162 05:25:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:30:51.162 05:25:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:30:51.162 05:25:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 01:30:51.162 05:25:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:30:51.162 05:25:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 01:30:51.162 05:25:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:30:51.162 05:25:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:30:51.162 05:25:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:30:51.162 05:25:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:30:51.162 05:25:42 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:30:51.162 05:25:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:30:51.162 05:25:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:51.162 05:25:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:30:51.162 05:25:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:51.421 05:25:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:30:51.421 "name": "Existed_Raid", 01:30:51.421 "uuid": "30059b0a-444c-4d6e-8808-9bb4d84bcfd9", 01:30:51.421 "strip_size_kb": 64, 01:30:51.421 "state": "online", 01:30:51.421 "raid_level": "raid5f", 01:30:51.421 "superblock": false, 01:30:51.421 "num_base_bdevs": 4, 01:30:51.421 "num_base_bdevs_discovered": 3, 01:30:51.421 "num_base_bdevs_operational": 3, 01:30:51.421 "base_bdevs_list": [ 01:30:51.421 { 01:30:51.421 "name": null, 01:30:51.421 "uuid": "00000000-0000-0000-0000-000000000000", 01:30:51.421 "is_configured": false, 01:30:51.421 "data_offset": 0, 01:30:51.421 "data_size": 65536 01:30:51.421 }, 01:30:51.421 { 01:30:51.421 "name": "BaseBdev2", 01:30:51.421 "uuid": "994e41e8-1962-4dd6-976a-3b31066b1cc3", 01:30:51.421 "is_configured": true, 01:30:51.421 "data_offset": 0, 01:30:51.421 "data_size": 65536 01:30:51.421 }, 01:30:51.421 { 01:30:51.421 "name": "BaseBdev3", 01:30:51.421 "uuid": "470a3b9e-7437-49de-affb-d7c85e5c4d50", 01:30:51.421 "is_configured": true, 01:30:51.421 "data_offset": 0, 01:30:51.421 "data_size": 65536 01:30:51.421 }, 01:30:51.421 { 01:30:51.421 "name": "BaseBdev4", 01:30:51.421 "uuid": "f2826595-4a37-47b6-b87e-b4900cafe3dc", 01:30:51.421 "is_configured": true, 01:30:51.421 "data_offset": 0, 01:30:51.421 "data_size": 65536 01:30:51.421 } 01:30:51.421 ] 01:30:51.421 }' 01:30:51.421 
05:25:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:30:51.421 05:25:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:30:51.679 05:25:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 01:30:51.679 05:25:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 01:30:51.679 05:25:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 01:30:51.679 05:25:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:51.679 05:25:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:30:51.679 05:25:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 01:30:51.679 05:25:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:51.938 05:25:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 01:30:51.938 05:25:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 01:30:51.938 05:25:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 01:30:51.938 05:25:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:51.938 05:25:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:30:51.938 [2024-12-09 05:25:43.316093] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 01:30:51.938 [2024-12-09 05:25:43.316280] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 01:30:51.938 [2024-12-09 05:25:43.409498] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 01:30:51.938 05:25:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 01:30:51.938 05:25:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 01:30:51.938 05:25:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 01:30:51.938 05:25:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 01:30:51.938 05:25:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:51.938 05:25:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:30:51.938 05:25:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 01:30:51.938 05:25:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:51.939 05:25:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 01:30:51.939 05:25:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 01:30:51.939 05:25:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 01:30:51.939 05:25:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:51.939 05:25:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:30:51.939 [2024-12-09 05:25:43.477583] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 01:30:52.197 05:25:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:52.197 05:25:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 01:30:52.197 05:25:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 01:30:52.197 05:25:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 01:30:52.197 05:25:43 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 01:30:52.197 05:25:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 01:30:52.197 05:25:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:30:52.197 05:25:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:52.197 05:25:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 01:30:52.197 05:25:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 01:30:52.197 05:25:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 01:30:52.197 05:25:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:52.197 05:25:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:30:52.197 [2024-12-09 05:25:43.629110] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 01:30:52.197 [2024-12-09 05:25:43.629189] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 01:30:52.197 05:25:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:52.197 05:25:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 01:30:52.197 05:25:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 01:30:52.197 05:25:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 01:30:52.197 05:25:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:52.197 05:25:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 01:30:52.197 05:25:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
01:30:52.197 05:25:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:52.197 05:25:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 01:30:52.197 05:25:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 01:30:52.197 05:25:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 01:30:52.197 05:25:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 01:30:52.197 05:25:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 01:30:52.197 05:25:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 01:30:52.197 05:25:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:52.197 05:25:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:30:52.456 BaseBdev2 01:30:52.456 05:25:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:52.456 05:25:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 01:30:52.456 05:25:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 01:30:52.456 05:25:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 01:30:52.456 05:25:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 01:30:52.456 05:25:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 01:30:52.456 05:25:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 01:30:52.456 05:25:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 01:30:52.456 05:25:43 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 01:30:52.456 05:25:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:30:52.456 05:25:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:52.456 05:25:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 01:30:52.456 05:25:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:52.456 05:25:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:30:52.456 [ 01:30:52.456 { 01:30:52.456 "name": "BaseBdev2", 01:30:52.456 "aliases": [ 01:30:52.456 "6dad3da5-a165-452c-bc43-16e7602105cf" 01:30:52.456 ], 01:30:52.456 "product_name": "Malloc disk", 01:30:52.456 "block_size": 512, 01:30:52.456 "num_blocks": 65536, 01:30:52.456 "uuid": "6dad3da5-a165-452c-bc43-16e7602105cf", 01:30:52.456 "assigned_rate_limits": { 01:30:52.456 "rw_ios_per_sec": 0, 01:30:52.456 "rw_mbytes_per_sec": 0, 01:30:52.456 "r_mbytes_per_sec": 0, 01:30:52.456 "w_mbytes_per_sec": 0 01:30:52.456 }, 01:30:52.456 "claimed": false, 01:30:52.456 "zoned": false, 01:30:52.456 "supported_io_types": { 01:30:52.456 "read": true, 01:30:52.456 "write": true, 01:30:52.456 "unmap": true, 01:30:52.456 "flush": true, 01:30:52.456 "reset": true, 01:30:52.456 "nvme_admin": false, 01:30:52.456 "nvme_io": false, 01:30:52.456 "nvme_io_md": false, 01:30:52.456 "write_zeroes": true, 01:30:52.456 "zcopy": true, 01:30:52.456 "get_zone_info": false, 01:30:52.456 "zone_management": false, 01:30:52.456 "zone_append": false, 01:30:52.456 "compare": false, 01:30:52.456 "compare_and_write": false, 01:30:52.456 "abort": true, 01:30:52.456 "seek_hole": false, 01:30:52.456 "seek_data": false, 01:30:52.456 "copy": true, 01:30:52.456 "nvme_iov_md": false 01:30:52.456 }, 01:30:52.456 "memory_domains": [ 01:30:52.456 { 01:30:52.456 "dma_device_id": "system", 01:30:52.456 
"dma_device_type": 1 01:30:52.456 }, 01:30:52.456 { 01:30:52.456 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:30:52.456 "dma_device_type": 2 01:30:52.456 } 01:30:52.456 ], 01:30:52.456 "driver_specific": {} 01:30:52.456 } 01:30:52.456 ] 01:30:52.456 05:25:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:52.457 05:25:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 01:30:52.457 05:25:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 01:30:52.457 05:25:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 01:30:52.457 05:25:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 01:30:52.457 05:25:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:52.457 05:25:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:30:52.457 BaseBdev3 01:30:52.457 05:25:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:52.457 05:25:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 01:30:52.457 05:25:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 01:30:52.457 05:25:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 01:30:52.457 05:25:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 01:30:52.457 05:25:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 01:30:52.457 05:25:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 01:30:52.457 05:25:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 01:30:52.457 05:25:43 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:52.457 05:25:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:30:52.457 05:25:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:52.457 05:25:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 01:30:52.457 05:25:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:52.457 05:25:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:30:52.457 [ 01:30:52.457 { 01:30:52.457 "name": "BaseBdev3", 01:30:52.457 "aliases": [ 01:30:52.457 "b2e70c12-4029-4073-8a27-b418fe5f9475" 01:30:52.457 ], 01:30:52.457 "product_name": "Malloc disk", 01:30:52.457 "block_size": 512, 01:30:52.457 "num_blocks": 65536, 01:30:52.457 "uuid": "b2e70c12-4029-4073-8a27-b418fe5f9475", 01:30:52.457 "assigned_rate_limits": { 01:30:52.457 "rw_ios_per_sec": 0, 01:30:52.457 "rw_mbytes_per_sec": 0, 01:30:52.457 "r_mbytes_per_sec": 0, 01:30:52.457 "w_mbytes_per_sec": 0 01:30:52.457 }, 01:30:52.457 "claimed": false, 01:30:52.457 "zoned": false, 01:30:52.457 "supported_io_types": { 01:30:52.457 "read": true, 01:30:52.457 "write": true, 01:30:52.457 "unmap": true, 01:30:52.457 "flush": true, 01:30:52.457 "reset": true, 01:30:52.457 "nvme_admin": false, 01:30:52.457 "nvme_io": false, 01:30:52.457 "nvme_io_md": false, 01:30:52.457 "write_zeroes": true, 01:30:52.457 "zcopy": true, 01:30:52.457 "get_zone_info": false, 01:30:52.457 "zone_management": false, 01:30:52.457 "zone_append": false, 01:30:52.457 "compare": false, 01:30:52.457 "compare_and_write": false, 01:30:52.457 "abort": true, 01:30:52.457 "seek_hole": false, 01:30:52.457 "seek_data": false, 01:30:52.457 "copy": true, 01:30:52.457 "nvme_iov_md": false 01:30:52.457 }, 01:30:52.457 "memory_domains": [ 01:30:52.457 { 01:30:52.457 
"dma_device_id": "system", 01:30:52.457 "dma_device_type": 1 01:30:52.457 }, 01:30:52.457 { 01:30:52.457 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:30:52.457 "dma_device_type": 2 01:30:52.457 } 01:30:52.457 ], 01:30:52.457 "driver_specific": {} 01:30:52.457 } 01:30:52.457 ] 01:30:52.457 05:25:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:52.457 05:25:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 01:30:52.457 05:25:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 01:30:52.457 05:25:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 01:30:52.457 05:25:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 01:30:52.457 05:25:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:52.457 05:25:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:30:52.457 BaseBdev4 01:30:52.457 05:25:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:52.457 05:25:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 01:30:52.457 05:25:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 01:30:52.457 05:25:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 01:30:52.457 05:25:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 01:30:52.457 05:25:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 01:30:52.457 05:25:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 01:30:52.457 05:25:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 
01:30:52.457 05:25:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:52.457 05:25:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:30:52.457 05:25:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:52.457 05:25:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 01:30:52.457 05:25:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:52.457 05:25:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:30:52.457 [ 01:30:52.457 { 01:30:52.457 "name": "BaseBdev4", 01:30:52.457 "aliases": [ 01:30:52.457 "a7a25d7a-6234-4eb7-8b03-9ad23dd780f9" 01:30:52.457 ], 01:30:52.457 "product_name": "Malloc disk", 01:30:52.457 "block_size": 512, 01:30:52.457 "num_blocks": 65536, 01:30:52.457 "uuid": "a7a25d7a-6234-4eb7-8b03-9ad23dd780f9", 01:30:52.457 "assigned_rate_limits": { 01:30:52.457 "rw_ios_per_sec": 0, 01:30:52.457 "rw_mbytes_per_sec": 0, 01:30:52.457 "r_mbytes_per_sec": 0, 01:30:52.457 "w_mbytes_per_sec": 0 01:30:52.457 }, 01:30:52.457 "claimed": false, 01:30:52.457 "zoned": false, 01:30:52.457 "supported_io_types": { 01:30:52.457 "read": true, 01:30:52.457 "write": true, 01:30:52.457 "unmap": true, 01:30:52.457 "flush": true, 01:30:52.457 "reset": true, 01:30:52.457 "nvme_admin": false, 01:30:52.457 "nvme_io": false, 01:30:52.457 "nvme_io_md": false, 01:30:52.457 "write_zeroes": true, 01:30:52.457 "zcopy": true, 01:30:52.457 "get_zone_info": false, 01:30:52.457 "zone_management": false, 01:30:52.457 "zone_append": false, 01:30:52.457 "compare": false, 01:30:52.457 "compare_and_write": false, 01:30:52.457 "abort": true, 01:30:52.457 "seek_hole": false, 01:30:52.457 "seek_data": false, 01:30:52.457 "copy": true, 01:30:52.457 "nvme_iov_md": false 01:30:52.457 }, 01:30:52.457 "memory_domains": [ 
01:30:52.457 { 01:30:52.457 "dma_device_id": "system", 01:30:52.457 "dma_device_type": 1 01:30:52.457 }, 01:30:52.457 { 01:30:52.457 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:30:52.457 "dma_device_type": 2 01:30:52.457 } 01:30:52.457 ], 01:30:52.457 "driver_specific": {} 01:30:52.457 } 01:30:52.457 ] 01:30:52.457 05:25:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:52.457 05:25:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 01:30:52.457 05:25:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 01:30:52.457 05:25:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 01:30:52.457 05:25:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 01:30:52.457 05:25:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:52.457 05:25:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:30:52.457 [2024-12-09 05:25:44.040971] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 01:30:52.457 [2024-12-09 05:25:44.041028] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 01:30:52.457 [2024-12-09 05:25:44.041062] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 01:30:52.457 [2024-12-09 05:25:44.043753] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 01:30:52.457 [2024-12-09 05:25:44.043828] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 01:30:52.457 05:25:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:52.457 05:25:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # 
verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 01:30:52.457 05:25:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:30:52.457 05:25:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:30:52.457 05:25:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 01:30:52.457 05:25:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:30:52.457 05:25:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 01:30:52.457 05:25:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:30:52.457 05:25:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:30:52.457 05:25:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:30:52.457 05:25:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:30:52.457 05:25:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:30:52.457 05:25:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:30:52.457 05:25:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:52.457 05:25:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:30:52.457 05:25:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:52.716 05:25:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:30:52.716 "name": "Existed_Raid", 01:30:52.716 "uuid": "00000000-0000-0000-0000-000000000000", 01:30:52.716 "strip_size_kb": 64, 01:30:52.716 "state": "configuring", 01:30:52.716 "raid_level": "raid5f", 01:30:52.716 
"superblock": false, 01:30:52.716 "num_base_bdevs": 4, 01:30:52.716 "num_base_bdevs_discovered": 3, 01:30:52.716 "num_base_bdevs_operational": 4, 01:30:52.716 "base_bdevs_list": [ 01:30:52.716 { 01:30:52.716 "name": "BaseBdev1", 01:30:52.716 "uuid": "00000000-0000-0000-0000-000000000000", 01:30:52.716 "is_configured": false, 01:30:52.716 "data_offset": 0, 01:30:52.716 "data_size": 0 01:30:52.716 }, 01:30:52.716 { 01:30:52.716 "name": "BaseBdev2", 01:30:52.716 "uuid": "6dad3da5-a165-452c-bc43-16e7602105cf", 01:30:52.716 "is_configured": true, 01:30:52.716 "data_offset": 0, 01:30:52.716 "data_size": 65536 01:30:52.716 }, 01:30:52.716 { 01:30:52.716 "name": "BaseBdev3", 01:30:52.716 "uuid": "b2e70c12-4029-4073-8a27-b418fe5f9475", 01:30:52.716 "is_configured": true, 01:30:52.716 "data_offset": 0, 01:30:52.716 "data_size": 65536 01:30:52.716 }, 01:30:52.716 { 01:30:52.716 "name": "BaseBdev4", 01:30:52.716 "uuid": "a7a25d7a-6234-4eb7-8b03-9ad23dd780f9", 01:30:52.716 "is_configured": true, 01:30:52.716 "data_offset": 0, 01:30:52.716 "data_size": 65536 01:30:52.716 } 01:30:52.716 ] 01:30:52.716 }' 01:30:52.716 05:25:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:30:52.716 05:25:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:30:52.974 05:25:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 01:30:52.974 05:25:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:52.974 05:25:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:30:52.974 [2024-12-09 05:25:44.573181] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 01:30:52.974 05:25:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:52.974 05:25:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state 
Existed_Raid configuring raid5f 64 4 01:30:52.974 05:25:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:30:52.974 05:25:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:30:52.974 05:25:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 01:30:52.974 05:25:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:30:52.974 05:25:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 01:30:52.974 05:25:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:30:52.974 05:25:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:30:52.974 05:25:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:30:52.974 05:25:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:30:52.974 05:25:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:30:52.974 05:25:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:30:52.974 05:25:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:52.974 05:25:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:30:53.233 05:25:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:53.233 05:25:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:30:53.233 "name": "Existed_Raid", 01:30:53.233 "uuid": "00000000-0000-0000-0000-000000000000", 01:30:53.233 "strip_size_kb": 64, 01:30:53.233 "state": "configuring", 01:30:53.233 "raid_level": "raid5f", 01:30:53.233 "superblock": false, 
01:30:53.233 "num_base_bdevs": 4, 01:30:53.233 "num_base_bdevs_discovered": 2, 01:30:53.233 "num_base_bdevs_operational": 4, 01:30:53.233 "base_bdevs_list": [ 01:30:53.233 { 01:30:53.233 "name": "BaseBdev1", 01:30:53.233 "uuid": "00000000-0000-0000-0000-000000000000", 01:30:53.233 "is_configured": false, 01:30:53.233 "data_offset": 0, 01:30:53.233 "data_size": 0 01:30:53.233 }, 01:30:53.233 { 01:30:53.233 "name": null, 01:30:53.233 "uuid": "6dad3da5-a165-452c-bc43-16e7602105cf", 01:30:53.233 "is_configured": false, 01:30:53.233 "data_offset": 0, 01:30:53.233 "data_size": 65536 01:30:53.233 }, 01:30:53.233 { 01:30:53.233 "name": "BaseBdev3", 01:30:53.233 "uuid": "b2e70c12-4029-4073-8a27-b418fe5f9475", 01:30:53.233 "is_configured": true, 01:30:53.233 "data_offset": 0, 01:30:53.233 "data_size": 65536 01:30:53.233 }, 01:30:53.233 { 01:30:53.233 "name": "BaseBdev4", 01:30:53.233 "uuid": "a7a25d7a-6234-4eb7-8b03-9ad23dd780f9", 01:30:53.233 "is_configured": true, 01:30:53.233 "data_offset": 0, 01:30:53.233 "data_size": 65536 01:30:53.233 } 01:30:53.233 ] 01:30:53.234 }' 01:30:53.234 05:25:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:30:53.234 05:25:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:30:53.492 05:25:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 01:30:53.492 05:25:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 01:30:53.492 05:25:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:53.492 05:25:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:30:53.492 05:25:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:53.750 05:25:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 01:30:53.750 
05:25:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 01:30:53.750 05:25:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:53.750 05:25:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:30:53.750 [2024-12-09 05:25:45.185078] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 01:30:53.750 BaseBdev1 01:30:53.750 05:25:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:53.750 05:25:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 01:30:53.750 05:25:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 01:30:53.751 05:25:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 01:30:53.751 05:25:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 01:30:53.751 05:25:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 01:30:53.751 05:25:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 01:30:53.751 05:25:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 01:30:53.751 05:25:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:53.751 05:25:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:30:53.751 05:25:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:53.751 05:25:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 01:30:53.751 05:25:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:53.751 
05:25:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:30:53.751 [ 01:30:53.751 { 01:30:53.751 "name": "BaseBdev1", 01:30:53.751 "aliases": [ 01:30:53.751 "3318b8b5-5ee9-4027-8282-e51aa8986d71" 01:30:53.751 ], 01:30:53.751 "product_name": "Malloc disk", 01:30:53.751 "block_size": 512, 01:30:53.751 "num_blocks": 65536, 01:30:53.751 "uuid": "3318b8b5-5ee9-4027-8282-e51aa8986d71", 01:30:53.751 "assigned_rate_limits": { 01:30:53.751 "rw_ios_per_sec": 0, 01:30:53.751 "rw_mbytes_per_sec": 0, 01:30:53.751 "r_mbytes_per_sec": 0, 01:30:53.751 "w_mbytes_per_sec": 0 01:30:53.751 }, 01:30:53.751 "claimed": true, 01:30:53.751 "claim_type": "exclusive_write", 01:30:53.751 "zoned": false, 01:30:53.751 "supported_io_types": { 01:30:53.751 "read": true, 01:30:53.751 "write": true, 01:30:53.751 "unmap": true, 01:30:53.751 "flush": true, 01:30:53.751 "reset": true, 01:30:53.751 "nvme_admin": false, 01:30:53.751 "nvme_io": false, 01:30:53.751 "nvme_io_md": false, 01:30:53.751 "write_zeroes": true, 01:30:53.751 "zcopy": true, 01:30:53.751 "get_zone_info": false, 01:30:53.751 "zone_management": false, 01:30:53.751 "zone_append": false, 01:30:53.751 "compare": false, 01:30:53.751 "compare_and_write": false, 01:30:53.751 "abort": true, 01:30:53.751 "seek_hole": false, 01:30:53.751 "seek_data": false, 01:30:53.751 "copy": true, 01:30:53.751 "nvme_iov_md": false 01:30:53.751 }, 01:30:53.751 "memory_domains": [ 01:30:53.751 { 01:30:53.751 "dma_device_id": "system", 01:30:53.751 "dma_device_type": 1 01:30:53.751 }, 01:30:53.751 { 01:30:53.751 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:30:53.751 "dma_device_type": 2 01:30:53.751 } 01:30:53.751 ], 01:30:53.751 "driver_specific": {} 01:30:53.751 } 01:30:53.751 ] 01:30:53.751 05:25:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:53.751 05:25:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 01:30:53.751 05:25:45 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 01:30:53.751 05:25:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:30:53.751 05:25:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:30:53.751 05:25:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 01:30:53.751 05:25:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:30:53.751 05:25:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 01:30:53.751 05:25:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:30:53.751 05:25:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:30:53.751 05:25:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:30:53.751 05:25:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:30:53.751 05:25:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:30:53.751 05:25:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:53.751 05:25:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:30:53.751 05:25:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:30:53.751 05:25:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:53.751 05:25:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:30:53.751 "name": "Existed_Raid", 01:30:53.751 "uuid": "00000000-0000-0000-0000-000000000000", 01:30:53.751 "strip_size_kb": 64, 01:30:53.751 "state": 
"configuring", 01:30:53.751 "raid_level": "raid5f", 01:30:53.751 "superblock": false, 01:30:53.751 "num_base_bdevs": 4, 01:30:53.751 "num_base_bdevs_discovered": 3, 01:30:53.751 "num_base_bdevs_operational": 4, 01:30:53.751 "base_bdevs_list": [ 01:30:53.751 { 01:30:53.751 "name": "BaseBdev1", 01:30:53.751 "uuid": "3318b8b5-5ee9-4027-8282-e51aa8986d71", 01:30:53.751 "is_configured": true, 01:30:53.751 "data_offset": 0, 01:30:53.751 "data_size": 65536 01:30:53.751 }, 01:30:53.751 { 01:30:53.751 "name": null, 01:30:53.751 "uuid": "6dad3da5-a165-452c-bc43-16e7602105cf", 01:30:53.751 "is_configured": false, 01:30:53.751 "data_offset": 0, 01:30:53.751 "data_size": 65536 01:30:53.751 }, 01:30:53.751 { 01:30:53.751 "name": "BaseBdev3", 01:30:53.751 "uuid": "b2e70c12-4029-4073-8a27-b418fe5f9475", 01:30:53.751 "is_configured": true, 01:30:53.751 "data_offset": 0, 01:30:53.751 "data_size": 65536 01:30:53.751 }, 01:30:53.751 { 01:30:53.751 "name": "BaseBdev4", 01:30:53.751 "uuid": "a7a25d7a-6234-4eb7-8b03-9ad23dd780f9", 01:30:53.751 "is_configured": true, 01:30:53.751 "data_offset": 0, 01:30:53.751 "data_size": 65536 01:30:53.751 } 01:30:53.751 ] 01:30:53.751 }' 01:30:53.751 05:25:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:30:53.751 05:25:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:30:54.317 05:25:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 01:30:54.317 05:25:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 01:30:54.317 05:25:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:54.317 05:25:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:30:54.317 05:25:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:54.317 05:25:45 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 01:30:54.317 05:25:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 01:30:54.317 05:25:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:54.317 05:25:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:30:54.317 [2024-12-09 05:25:45.777292] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 01:30:54.317 05:25:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:54.317 05:25:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 01:30:54.317 05:25:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:30:54.317 05:25:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:30:54.317 05:25:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 01:30:54.317 05:25:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:30:54.317 05:25:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 01:30:54.317 05:25:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:30:54.317 05:25:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:30:54.317 05:25:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:30:54.317 05:25:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:30:54.317 05:25:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:30:54.317 05:25:45 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:30:54.317 05:25:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:54.317 05:25:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:30:54.317 05:25:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:54.317 05:25:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:30:54.317 "name": "Existed_Raid", 01:30:54.317 "uuid": "00000000-0000-0000-0000-000000000000", 01:30:54.317 "strip_size_kb": 64, 01:30:54.317 "state": "configuring", 01:30:54.317 "raid_level": "raid5f", 01:30:54.317 "superblock": false, 01:30:54.317 "num_base_bdevs": 4, 01:30:54.317 "num_base_bdevs_discovered": 2, 01:30:54.317 "num_base_bdevs_operational": 4, 01:30:54.317 "base_bdevs_list": [ 01:30:54.317 { 01:30:54.317 "name": "BaseBdev1", 01:30:54.317 "uuid": "3318b8b5-5ee9-4027-8282-e51aa8986d71", 01:30:54.317 "is_configured": true, 01:30:54.317 "data_offset": 0, 01:30:54.317 "data_size": 65536 01:30:54.317 }, 01:30:54.317 { 01:30:54.317 "name": null, 01:30:54.317 "uuid": "6dad3da5-a165-452c-bc43-16e7602105cf", 01:30:54.317 "is_configured": false, 01:30:54.317 "data_offset": 0, 01:30:54.317 "data_size": 65536 01:30:54.317 }, 01:30:54.317 { 01:30:54.317 "name": null, 01:30:54.317 "uuid": "b2e70c12-4029-4073-8a27-b418fe5f9475", 01:30:54.317 "is_configured": false, 01:30:54.317 "data_offset": 0, 01:30:54.317 "data_size": 65536 01:30:54.317 }, 01:30:54.317 { 01:30:54.317 "name": "BaseBdev4", 01:30:54.317 "uuid": "a7a25d7a-6234-4eb7-8b03-9ad23dd780f9", 01:30:54.317 "is_configured": true, 01:30:54.317 "data_offset": 0, 01:30:54.317 "data_size": 65536 01:30:54.317 } 01:30:54.317 ] 01:30:54.317 }' 01:30:54.317 05:25:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:30:54.317 05:25:45 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:30:54.883 05:25:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 01:30:54.883 05:25:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 01:30:54.883 05:25:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:54.883 05:25:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:30:54.883 05:25:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:54.883 05:25:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 01:30:54.883 05:25:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 01:30:54.883 05:25:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:54.883 05:25:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:30:54.883 [2024-12-09 05:25:46.349523] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 01:30:54.883 05:25:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:54.883 05:25:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 01:30:54.883 05:25:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:30:54.883 05:25:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:30:54.883 05:25:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 01:30:54.883 05:25:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:30:54.883 
05:25:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 01:30:54.883 05:25:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:30:54.883 05:25:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:30:54.883 05:25:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:30:54.883 05:25:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:30:54.883 05:25:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:30:54.883 05:25:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:30:54.883 05:25:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:54.883 05:25:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:30:54.883 05:25:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:54.883 05:25:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:30:54.883 "name": "Existed_Raid", 01:30:54.883 "uuid": "00000000-0000-0000-0000-000000000000", 01:30:54.883 "strip_size_kb": 64, 01:30:54.883 "state": "configuring", 01:30:54.883 "raid_level": "raid5f", 01:30:54.883 "superblock": false, 01:30:54.883 "num_base_bdevs": 4, 01:30:54.883 "num_base_bdevs_discovered": 3, 01:30:54.883 "num_base_bdevs_operational": 4, 01:30:54.883 "base_bdevs_list": [ 01:30:54.883 { 01:30:54.883 "name": "BaseBdev1", 01:30:54.883 "uuid": "3318b8b5-5ee9-4027-8282-e51aa8986d71", 01:30:54.883 "is_configured": true, 01:30:54.883 "data_offset": 0, 01:30:54.883 "data_size": 65536 01:30:54.883 }, 01:30:54.883 { 01:30:54.883 "name": null, 01:30:54.883 "uuid": "6dad3da5-a165-452c-bc43-16e7602105cf", 01:30:54.883 "is_configured": 
false, 01:30:54.883 "data_offset": 0, 01:30:54.883 "data_size": 65536 01:30:54.883 }, 01:30:54.883 { 01:30:54.883 "name": "BaseBdev3", 01:30:54.883 "uuid": "b2e70c12-4029-4073-8a27-b418fe5f9475", 01:30:54.883 "is_configured": true, 01:30:54.883 "data_offset": 0, 01:30:54.883 "data_size": 65536 01:30:54.883 }, 01:30:54.883 { 01:30:54.883 "name": "BaseBdev4", 01:30:54.883 "uuid": "a7a25d7a-6234-4eb7-8b03-9ad23dd780f9", 01:30:54.883 "is_configured": true, 01:30:54.883 "data_offset": 0, 01:30:54.883 "data_size": 65536 01:30:54.883 } 01:30:54.883 ] 01:30:54.883 }' 01:30:54.883 05:25:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:30:54.883 05:25:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:30:55.450 05:25:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 01:30:55.451 05:25:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 01:30:55.451 05:25:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:55.451 05:25:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:30:55.451 05:25:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:55.451 05:25:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 01:30:55.451 05:25:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 01:30:55.451 05:25:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:55.451 05:25:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:30:55.451 [2024-12-09 05:25:46.941809] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 01:30:55.451 05:25:47 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:55.451 05:25:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 01:30:55.451 05:25:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:30:55.451 05:25:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:30:55.451 05:25:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 01:30:55.451 05:25:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:30:55.451 05:25:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 01:30:55.451 05:25:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:30:55.451 05:25:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:30:55.451 05:25:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:30:55.451 05:25:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:30:55.451 05:25:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:30:55.451 05:25:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:55.451 05:25:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:30:55.451 05:25:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:30:55.451 05:25:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:55.709 05:25:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:30:55.709 "name": "Existed_Raid", 01:30:55.709 "uuid": 
"00000000-0000-0000-0000-000000000000", 01:30:55.709 "strip_size_kb": 64, 01:30:55.709 "state": "configuring", 01:30:55.709 "raid_level": "raid5f", 01:30:55.709 "superblock": false, 01:30:55.709 "num_base_bdevs": 4, 01:30:55.709 "num_base_bdevs_discovered": 2, 01:30:55.709 "num_base_bdevs_operational": 4, 01:30:55.709 "base_bdevs_list": [ 01:30:55.709 { 01:30:55.709 "name": null, 01:30:55.709 "uuid": "3318b8b5-5ee9-4027-8282-e51aa8986d71", 01:30:55.709 "is_configured": false, 01:30:55.709 "data_offset": 0, 01:30:55.709 "data_size": 65536 01:30:55.709 }, 01:30:55.709 { 01:30:55.709 "name": null, 01:30:55.709 "uuid": "6dad3da5-a165-452c-bc43-16e7602105cf", 01:30:55.709 "is_configured": false, 01:30:55.709 "data_offset": 0, 01:30:55.709 "data_size": 65536 01:30:55.709 }, 01:30:55.709 { 01:30:55.709 "name": "BaseBdev3", 01:30:55.709 "uuid": "b2e70c12-4029-4073-8a27-b418fe5f9475", 01:30:55.709 "is_configured": true, 01:30:55.709 "data_offset": 0, 01:30:55.709 "data_size": 65536 01:30:55.709 }, 01:30:55.709 { 01:30:55.709 "name": "BaseBdev4", 01:30:55.709 "uuid": "a7a25d7a-6234-4eb7-8b03-9ad23dd780f9", 01:30:55.709 "is_configured": true, 01:30:55.709 "data_offset": 0, 01:30:55.709 "data_size": 65536 01:30:55.709 } 01:30:55.709 ] 01:30:55.709 }' 01:30:55.709 05:25:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:30:55.709 05:25:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:30:55.968 05:25:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 01:30:55.968 05:25:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 01:30:55.968 05:25:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:55.968 05:25:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:30:55.968 05:25:47 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:55.968 05:25:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 01:30:55.968 05:25:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 01:30:55.968 05:25:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:55.968 05:25:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:30:55.968 [2024-12-09 05:25:47.546099] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 01:30:55.968 05:25:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:55.968 05:25:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 01:30:55.968 05:25:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:30:55.968 05:25:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:30:55.968 05:25:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 01:30:55.968 05:25:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:30:55.968 05:25:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 01:30:55.968 05:25:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:30:55.968 05:25:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:30:55.968 05:25:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:30:55.968 05:25:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:30:55.968 05:25:47 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:30:55.968 05:25:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:55.968 05:25:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:30:55.968 05:25:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:30:55.968 05:25:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:56.227 05:25:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:30:56.227 "name": "Existed_Raid", 01:30:56.227 "uuid": "00000000-0000-0000-0000-000000000000", 01:30:56.227 "strip_size_kb": 64, 01:30:56.227 "state": "configuring", 01:30:56.227 "raid_level": "raid5f", 01:30:56.227 "superblock": false, 01:30:56.227 "num_base_bdevs": 4, 01:30:56.227 "num_base_bdevs_discovered": 3, 01:30:56.227 "num_base_bdevs_operational": 4, 01:30:56.227 "base_bdevs_list": [ 01:30:56.227 { 01:30:56.227 "name": null, 01:30:56.227 "uuid": "3318b8b5-5ee9-4027-8282-e51aa8986d71", 01:30:56.227 "is_configured": false, 01:30:56.227 "data_offset": 0, 01:30:56.227 "data_size": 65536 01:30:56.227 }, 01:30:56.227 { 01:30:56.227 "name": "BaseBdev2", 01:30:56.227 "uuid": "6dad3da5-a165-452c-bc43-16e7602105cf", 01:30:56.227 "is_configured": true, 01:30:56.227 "data_offset": 0, 01:30:56.227 "data_size": 65536 01:30:56.227 }, 01:30:56.227 { 01:30:56.227 "name": "BaseBdev3", 01:30:56.227 "uuid": "b2e70c12-4029-4073-8a27-b418fe5f9475", 01:30:56.227 "is_configured": true, 01:30:56.227 "data_offset": 0, 01:30:56.227 "data_size": 65536 01:30:56.227 }, 01:30:56.227 { 01:30:56.227 "name": "BaseBdev4", 01:30:56.227 "uuid": "a7a25d7a-6234-4eb7-8b03-9ad23dd780f9", 01:30:56.227 "is_configured": true, 01:30:56.227 "data_offset": 0, 01:30:56.227 "data_size": 65536 01:30:56.227 } 01:30:56.227 ] 01:30:56.227 }' 01:30:56.227 05:25:47 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:30:56.227 05:25:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:30:56.485 05:25:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 01:30:56.485 05:25:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 01:30:56.485 05:25:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:56.485 05:25:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:30:56.485 05:25:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:56.744 05:25:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 01:30:56.744 05:25:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 01:30:56.744 05:25:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:56.744 05:25:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:30:56.744 05:25:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 01:30:56.744 05:25:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:56.744 05:25:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 3318b8b5-5ee9-4027-8282-e51aa8986d71 01:30:56.744 05:25:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:56.744 05:25:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:30:56.744 [2024-12-09 05:25:48.205920] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 01:30:56.744 [2024-12-09 
05:25:48.205986] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 01:30:56.744 [2024-12-09 05:25:48.205999] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 01:30:56.744 [2024-12-09 05:25:48.206378] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 01:30:56.744 [2024-12-09 05:25:48.213987] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 01:30:56.744 [2024-12-09 05:25:48.214034] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 01:30:56.744 [2024-12-09 05:25:48.214435] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:30:56.744 NewBaseBdev 01:30:56.744 05:25:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:56.744 05:25:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 01:30:56.744 05:25:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 01:30:56.744 05:25:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 01:30:56.744 05:25:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 01:30:56.744 05:25:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 01:30:56.744 05:25:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 01:30:56.744 05:25:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 01:30:56.744 05:25:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:56.744 05:25:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:30:56.744 05:25:48 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:56.744 05:25:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 01:30:56.744 05:25:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:56.744 05:25:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:30:56.744 [ 01:30:56.744 { 01:30:56.744 "name": "NewBaseBdev", 01:30:56.744 "aliases": [ 01:30:56.744 "3318b8b5-5ee9-4027-8282-e51aa8986d71" 01:30:56.744 ], 01:30:56.744 "product_name": "Malloc disk", 01:30:56.744 "block_size": 512, 01:30:56.744 "num_blocks": 65536, 01:30:56.744 "uuid": "3318b8b5-5ee9-4027-8282-e51aa8986d71", 01:30:56.744 "assigned_rate_limits": { 01:30:56.744 "rw_ios_per_sec": 0, 01:30:56.744 "rw_mbytes_per_sec": 0, 01:30:56.744 "r_mbytes_per_sec": 0, 01:30:56.744 "w_mbytes_per_sec": 0 01:30:56.744 }, 01:30:56.744 "claimed": true, 01:30:56.744 "claim_type": "exclusive_write", 01:30:56.744 "zoned": false, 01:30:56.744 "supported_io_types": { 01:30:56.744 "read": true, 01:30:56.744 "write": true, 01:30:56.744 "unmap": true, 01:30:56.744 "flush": true, 01:30:56.744 "reset": true, 01:30:56.744 "nvme_admin": false, 01:30:56.744 "nvme_io": false, 01:30:56.744 "nvme_io_md": false, 01:30:56.744 "write_zeroes": true, 01:30:56.744 "zcopy": true, 01:30:56.744 "get_zone_info": false, 01:30:56.744 "zone_management": false, 01:30:56.744 "zone_append": false, 01:30:56.744 "compare": false, 01:30:56.744 "compare_and_write": false, 01:30:56.744 "abort": true, 01:30:56.744 "seek_hole": false, 01:30:56.744 "seek_data": false, 01:30:56.744 "copy": true, 01:30:56.744 "nvme_iov_md": false 01:30:56.744 }, 01:30:56.744 "memory_domains": [ 01:30:56.744 { 01:30:56.744 "dma_device_id": "system", 01:30:56.744 "dma_device_type": 1 01:30:56.744 }, 01:30:56.744 { 01:30:56.744 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:30:56.744 "dma_device_type": 2 01:30:56.744 } 
01:30:56.744 ], 01:30:56.744 "driver_specific": {} 01:30:56.744 } 01:30:56.744 ] 01:30:56.744 05:25:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:56.744 05:25:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 01:30:56.744 05:25:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 01:30:56.744 05:25:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:30:56.744 05:25:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:30:56.744 05:25:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 01:30:56.744 05:25:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:30:56.744 05:25:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 01:30:56.744 05:25:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:30:56.744 05:25:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:30:56.744 05:25:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:30:56.744 05:25:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:30:56.744 05:25:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:30:56.744 05:25:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:30:56.744 05:25:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:56.744 05:25:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:30:56.744 05:25:48 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:56.744 05:25:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:30:56.744 "name": "Existed_Raid", 01:30:56.744 "uuid": "08195f81-338b-46b5-9dc1-b794670ccb17", 01:30:56.744 "strip_size_kb": 64, 01:30:56.744 "state": "online", 01:30:56.744 "raid_level": "raid5f", 01:30:56.744 "superblock": false, 01:30:56.744 "num_base_bdevs": 4, 01:30:56.744 "num_base_bdevs_discovered": 4, 01:30:56.744 "num_base_bdevs_operational": 4, 01:30:56.744 "base_bdevs_list": [ 01:30:56.744 { 01:30:56.744 "name": "NewBaseBdev", 01:30:56.744 "uuid": "3318b8b5-5ee9-4027-8282-e51aa8986d71", 01:30:56.744 "is_configured": true, 01:30:56.744 "data_offset": 0, 01:30:56.744 "data_size": 65536 01:30:56.744 }, 01:30:56.744 { 01:30:56.744 "name": "BaseBdev2", 01:30:56.744 "uuid": "6dad3da5-a165-452c-bc43-16e7602105cf", 01:30:56.744 "is_configured": true, 01:30:56.744 "data_offset": 0, 01:30:56.744 "data_size": 65536 01:30:56.744 }, 01:30:56.744 { 01:30:56.744 "name": "BaseBdev3", 01:30:56.744 "uuid": "b2e70c12-4029-4073-8a27-b418fe5f9475", 01:30:56.744 "is_configured": true, 01:30:56.745 "data_offset": 0, 01:30:56.745 "data_size": 65536 01:30:56.745 }, 01:30:56.745 { 01:30:56.745 "name": "BaseBdev4", 01:30:56.745 "uuid": "a7a25d7a-6234-4eb7-8b03-9ad23dd780f9", 01:30:56.745 "is_configured": true, 01:30:56.745 "data_offset": 0, 01:30:56.745 "data_size": 65536 01:30:56.745 } 01:30:56.745 ] 01:30:56.745 }' 01:30:56.745 05:25:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:30:56.745 05:25:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:30:57.312 05:25:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 01:30:57.312 05:25:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 01:30:57.312 05:25:48 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 01:30:57.312 05:25:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 01:30:57.312 05:25:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 01:30:57.312 05:25:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 01:30:57.312 05:25:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 01:30:57.312 05:25:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:57.312 05:25:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:30:57.312 05:25:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 01:30:57.312 [2024-12-09 05:25:48.770745] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 01:30:57.312 05:25:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:57.312 05:25:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 01:30:57.312 "name": "Existed_Raid", 01:30:57.312 "aliases": [ 01:30:57.312 "08195f81-338b-46b5-9dc1-b794670ccb17" 01:30:57.312 ], 01:30:57.312 "product_name": "Raid Volume", 01:30:57.312 "block_size": 512, 01:30:57.312 "num_blocks": 196608, 01:30:57.312 "uuid": "08195f81-338b-46b5-9dc1-b794670ccb17", 01:30:57.312 "assigned_rate_limits": { 01:30:57.312 "rw_ios_per_sec": 0, 01:30:57.312 "rw_mbytes_per_sec": 0, 01:30:57.312 "r_mbytes_per_sec": 0, 01:30:57.312 "w_mbytes_per_sec": 0 01:30:57.312 }, 01:30:57.312 "claimed": false, 01:30:57.312 "zoned": false, 01:30:57.312 "supported_io_types": { 01:30:57.312 "read": true, 01:30:57.312 "write": true, 01:30:57.312 "unmap": false, 01:30:57.312 "flush": false, 01:30:57.312 "reset": true, 01:30:57.312 "nvme_admin": false, 01:30:57.312 "nvme_io": false, 01:30:57.312 "nvme_io_md": 
false, 01:30:57.312 "write_zeroes": true, 01:30:57.312 "zcopy": false, 01:30:57.312 "get_zone_info": false, 01:30:57.312 "zone_management": false, 01:30:57.312 "zone_append": false, 01:30:57.312 "compare": false, 01:30:57.313 "compare_and_write": false, 01:30:57.313 "abort": false, 01:30:57.313 "seek_hole": false, 01:30:57.313 "seek_data": false, 01:30:57.313 "copy": false, 01:30:57.313 "nvme_iov_md": false 01:30:57.313 }, 01:30:57.313 "driver_specific": { 01:30:57.313 "raid": { 01:30:57.313 "uuid": "08195f81-338b-46b5-9dc1-b794670ccb17", 01:30:57.313 "strip_size_kb": 64, 01:30:57.313 "state": "online", 01:30:57.313 "raid_level": "raid5f", 01:30:57.313 "superblock": false, 01:30:57.313 "num_base_bdevs": 4, 01:30:57.313 "num_base_bdevs_discovered": 4, 01:30:57.313 "num_base_bdevs_operational": 4, 01:30:57.313 "base_bdevs_list": [ 01:30:57.313 { 01:30:57.313 "name": "NewBaseBdev", 01:30:57.313 "uuid": "3318b8b5-5ee9-4027-8282-e51aa8986d71", 01:30:57.313 "is_configured": true, 01:30:57.313 "data_offset": 0, 01:30:57.313 "data_size": 65536 01:30:57.313 }, 01:30:57.313 { 01:30:57.313 "name": "BaseBdev2", 01:30:57.313 "uuid": "6dad3da5-a165-452c-bc43-16e7602105cf", 01:30:57.313 "is_configured": true, 01:30:57.313 "data_offset": 0, 01:30:57.313 "data_size": 65536 01:30:57.313 }, 01:30:57.313 { 01:30:57.313 "name": "BaseBdev3", 01:30:57.313 "uuid": "b2e70c12-4029-4073-8a27-b418fe5f9475", 01:30:57.313 "is_configured": true, 01:30:57.313 "data_offset": 0, 01:30:57.313 "data_size": 65536 01:30:57.313 }, 01:30:57.313 { 01:30:57.313 "name": "BaseBdev4", 01:30:57.313 "uuid": "a7a25d7a-6234-4eb7-8b03-9ad23dd780f9", 01:30:57.313 "is_configured": true, 01:30:57.313 "data_offset": 0, 01:30:57.313 "data_size": 65536 01:30:57.313 } 01:30:57.313 ] 01:30:57.313 } 01:30:57.313 } 01:30:57.313 }' 01:30:57.313 05:25:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 01:30:57.313 05:25:48 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 01:30:57.313 BaseBdev2 01:30:57.313 BaseBdev3 01:30:57.313 BaseBdev4' 01:30:57.313 05:25:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:30:57.313 05:25:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 01:30:57.313 05:25:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:30:57.313 05:25:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 01:30:57.313 05:25:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:57.313 05:25:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:30:57.313 05:25:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:30:57.572 05:25:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:57.572 05:25:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:30:57.572 05:25:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:30:57.572 05:25:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:30:57.572 05:25:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 01:30:57.572 05:25:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:57.572 05:25:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:30:57.572 05:25:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 01:30:57.572 05:25:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:57.572 05:25:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:30:57.572 05:25:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:30:57.572 05:25:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:30:57.572 05:25:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:30:57.572 05:25:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 01:30:57.572 05:25:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:57.572 05:25:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:30:57.572 05:25:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:57.572 05:25:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:30:57.572 05:25:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:30:57.572 05:25:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:30:57.572 05:25:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 01:30:57.572 05:25:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:30:57.572 05:25:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:57.572 05:25:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:30:57.572 05:25:49 bdev_raid.raid5f_state_function_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:57.572 05:25:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:30:57.572 05:25:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:30:57.572 05:25:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 01:30:57.572 05:25:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:57.572 05:25:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:30:57.572 [2024-12-09 05:25:49.154554] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 01:30:57.572 [2024-12-09 05:25:49.154602] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 01:30:57.572 [2024-12-09 05:25:49.154693] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 01:30:57.572 [2024-12-09 05:25:49.155079] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 01:30:57.572 [2024-12-09 05:25:49.155099] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 01:30:57.572 05:25:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:57.572 05:25:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 83098 01:30:57.572 05:25:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 83098 ']' 01:30:57.572 05:25:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # kill -0 83098 01:30:57.572 05:25:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 01:30:57.573 05:25:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:30:57.573 05:25:49 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83098 01:30:57.831 05:25:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:30:57.831 05:25:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:30:57.831 05:25:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83098' 01:30:57.831 killing process with pid 83098 01:30:57.831 05:25:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 83098 01:30:57.831 [2024-12-09 05:25:49.207073] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 01:30:57.831 05:25:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 83098 01:30:58.091 [2024-12-09 05:25:49.518591] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 01:30:59.467 05:25:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 01:30:59.467 01:30:59.467 real 0m12.959s 01:30:59.467 user 0m21.319s 01:30:59.467 sys 0m1.883s 01:30:59.467 05:25:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 01:30:59.467 ************************************ 01:30:59.467 END TEST raid5f_state_function_test 01:30:59.467 ************************************ 01:30:59.467 05:25:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 01:30:59.467 05:25:50 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 01:30:59.467 05:25:50 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 01:30:59.467 05:25:50 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 01:30:59.467 05:25:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 01:30:59.467 ************************************ 01:30:59.467 START TEST 
raid5f_state_function_test_sb 01:30:59.467 ************************************ 01:30:59.467 05:25:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 true 01:30:59.467 05:25:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 01:30:59.467 05:25:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 01:30:59.467 05:25:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 01:30:59.467 05:25:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 01:30:59.467 05:25:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 01:30:59.467 05:25:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 01:30:59.467 05:25:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 01:30:59.467 05:25:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 01:30:59.467 05:25:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 01:30:59.467 05:25:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 01:30:59.467 05:25:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 01:30:59.467 05:25:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 01:30:59.467 05:25:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 01:30:59.467 05:25:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 01:30:59.467 05:25:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 01:30:59.467 05:25:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 01:30:59.467 
05:25:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 01:30:59.467 05:25:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 01:30:59.467 05:25:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 01:30:59.467 05:25:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 01:30:59.467 05:25:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 01:30:59.467 05:25:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 01:30:59.467 05:25:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 01:30:59.467 05:25:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 01:30:59.467 05:25:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 01:30:59.467 05:25:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 01:30:59.467 05:25:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 01:30:59.467 05:25:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 01:30:59.467 Process raid pid: 83776 01:30:59.467 05:25:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 01:30:59.467 05:25:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=83776 01:30:59.467 05:25:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83776' 01:30:59.467 05:25:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 83776 01:30:59.467 05:25:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 01:30:59.467 05:25:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 83776 ']' 01:30:59.467 05:25:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:30:59.467 05:25:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 01:30:59.467 05:25:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:30:59.467 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:30:59.467 05:25:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 01:30:59.467 05:25:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:30:59.467 [2024-12-09 05:25:50.815862] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
01:30:59.467 [2024-12-09 05:25:50.816295] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:30:59.467 [2024-12-09 05:25:50.991927] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:30:59.724 [2024-12-09 05:25:51.126488] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:30:59.724 [2024-12-09 05:25:51.318461] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 01:30:59.724 [2024-12-09 05:25:51.318784] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 01:31:00.293 05:25:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:31:00.293 05:25:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 01:31:00.293 05:25:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 01:31:00.293 05:25:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:00.293 05:25:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:31:00.293 [2024-12-09 05:25:51.899594] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 01:31:00.293 [2024-12-09 05:25:51.899802] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 01:31:00.293 [2024-12-09 05:25:51.899833] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 01:31:00.293 [2024-12-09 05:25:51.899852] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 01:31:00.293 [2024-12-09 05:25:51.899862] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev3 01:31:00.293 [2024-12-09 05:25:51.899877] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 01:31:00.293 [2024-12-09 05:25:51.899886] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 01:31:00.293 [2024-12-09 05:25:51.899901] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 01:31:00.293 05:25:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:00.293 05:25:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 01:31:00.293 05:25:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:31:00.293 05:25:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:31:00.293 05:25:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 01:31:00.293 05:25:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:31:00.293 05:25:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 01:31:00.293 05:25:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:31:00.293 05:25:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:31:00.293 05:25:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:31:00.293 05:25:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 01:31:00.557 05:25:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:31:00.557 05:25:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
01:31:00.557 05:25:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:31:00.557 05:25:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:31:00.557 05:25:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:00.557 05:25:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:31:00.557 "name": "Existed_Raid", 01:31:00.557 "uuid": "ed341941-0f2f-4449-8443-cb8681f3221d", 01:31:00.557 "strip_size_kb": 64, 01:31:00.557 "state": "configuring", 01:31:00.557 "raid_level": "raid5f", 01:31:00.557 "superblock": true, 01:31:00.557 "num_base_bdevs": 4, 01:31:00.557 "num_base_bdevs_discovered": 0, 01:31:00.557 "num_base_bdevs_operational": 4, 01:31:00.557 "base_bdevs_list": [ 01:31:00.557 { 01:31:00.557 "name": "BaseBdev1", 01:31:00.557 "uuid": "00000000-0000-0000-0000-000000000000", 01:31:00.557 "is_configured": false, 01:31:00.557 "data_offset": 0, 01:31:00.557 "data_size": 0 01:31:00.557 }, 01:31:00.557 { 01:31:00.557 "name": "BaseBdev2", 01:31:00.557 "uuid": "00000000-0000-0000-0000-000000000000", 01:31:00.557 "is_configured": false, 01:31:00.557 "data_offset": 0, 01:31:00.557 "data_size": 0 01:31:00.557 }, 01:31:00.557 { 01:31:00.557 "name": "BaseBdev3", 01:31:00.557 "uuid": "00000000-0000-0000-0000-000000000000", 01:31:00.557 "is_configured": false, 01:31:00.557 "data_offset": 0, 01:31:00.557 "data_size": 0 01:31:00.557 }, 01:31:00.557 { 01:31:00.557 "name": "BaseBdev4", 01:31:00.557 "uuid": "00000000-0000-0000-0000-000000000000", 01:31:00.557 "is_configured": false, 01:31:00.557 "data_offset": 0, 01:31:00.557 "data_size": 0 01:31:00.557 } 01:31:00.557 ] 01:31:00.557 }' 01:31:00.557 05:25:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:31:00.557 05:25:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
01:31:00.814 05:25:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 01:31:00.814 05:25:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:00.814 05:25:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:31:00.814 [2024-12-09 05:25:52.407688] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 01:31:00.814 [2024-12-09 05:25:52.407871] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 01:31:00.814 05:25:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:00.814 05:25:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 01:31:00.814 05:25:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:00.814 05:25:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:31:00.814 [2024-12-09 05:25:52.419693] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 01:31:00.814 [2024-12-09 05:25:52.419959] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 01:31:00.814 [2024-12-09 05:25:52.420091] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 01:31:00.814 [2024-12-09 05:25:52.420215] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 01:31:00.814 [2024-12-09 05:25:52.420331] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 01:31:00.814 [2024-12-09 05:25:52.420420] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 01:31:00.814 [2024-12-09 05:25:52.420595] 
bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 01:31:00.814 [2024-12-09 05:25:52.420677] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 01:31:00.814 05:25:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:00.814 05:25:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 01:31:00.814 05:25:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:00.814 05:25:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:31:01.073 [2024-12-09 05:25:52.468908] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 01:31:01.073 BaseBdev1 01:31:01.073 05:25:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:01.073 05:25:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 01:31:01.073 05:25:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 01:31:01.073 05:25:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 01:31:01.073 05:25:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 01:31:01.073 05:25:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 01:31:01.073 05:25:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 01:31:01.073 05:25:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 01:31:01.073 05:25:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:01.073 05:25:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 01:31:01.073 05:25:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:01.073 05:25:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 01:31:01.073 05:25:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:01.073 05:25:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:31:01.073 [ 01:31:01.073 { 01:31:01.073 "name": "BaseBdev1", 01:31:01.073 "aliases": [ 01:31:01.073 "061c275e-35ba-49d8-bb7f-f95a9971d8d4" 01:31:01.073 ], 01:31:01.073 "product_name": "Malloc disk", 01:31:01.073 "block_size": 512, 01:31:01.073 "num_blocks": 65536, 01:31:01.073 "uuid": "061c275e-35ba-49d8-bb7f-f95a9971d8d4", 01:31:01.073 "assigned_rate_limits": { 01:31:01.073 "rw_ios_per_sec": 0, 01:31:01.073 "rw_mbytes_per_sec": 0, 01:31:01.073 "r_mbytes_per_sec": 0, 01:31:01.073 "w_mbytes_per_sec": 0 01:31:01.073 }, 01:31:01.073 "claimed": true, 01:31:01.073 "claim_type": "exclusive_write", 01:31:01.073 "zoned": false, 01:31:01.073 "supported_io_types": { 01:31:01.073 "read": true, 01:31:01.073 "write": true, 01:31:01.073 "unmap": true, 01:31:01.073 "flush": true, 01:31:01.073 "reset": true, 01:31:01.073 "nvme_admin": false, 01:31:01.073 "nvme_io": false, 01:31:01.073 "nvme_io_md": false, 01:31:01.073 "write_zeroes": true, 01:31:01.073 "zcopy": true, 01:31:01.073 "get_zone_info": false, 01:31:01.073 "zone_management": false, 01:31:01.073 "zone_append": false, 01:31:01.073 "compare": false, 01:31:01.073 "compare_and_write": false, 01:31:01.073 "abort": true, 01:31:01.073 "seek_hole": false, 01:31:01.073 "seek_data": false, 01:31:01.073 "copy": true, 01:31:01.073 "nvme_iov_md": false 01:31:01.073 }, 01:31:01.073 "memory_domains": [ 01:31:01.073 { 01:31:01.073 "dma_device_id": "system", 01:31:01.073 "dma_device_type": 1 01:31:01.073 }, 01:31:01.073 { 01:31:01.073 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 01:31:01.073 "dma_device_type": 2 01:31:01.073 } 01:31:01.073 ], 01:31:01.073 "driver_specific": {} 01:31:01.073 } 01:31:01.073 ] 01:31:01.073 05:25:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:01.073 05:25:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 01:31:01.073 05:25:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 01:31:01.073 05:25:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:31:01.073 05:25:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:31:01.073 05:25:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 01:31:01.073 05:25:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:31:01.073 05:25:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 01:31:01.073 05:25:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:31:01.073 05:25:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:31:01.073 05:25:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:31:01.073 05:25:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 01:31:01.073 05:25:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:31:01.073 05:25:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:31:01.073 05:25:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:01.073 05:25:52 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:31:01.073 05:25:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:01.073 05:25:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:31:01.073 "name": "Existed_Raid", 01:31:01.073 "uuid": "f52eb478-7f91-4796-b146-f976e5e4d6fa", 01:31:01.073 "strip_size_kb": 64, 01:31:01.073 "state": "configuring", 01:31:01.073 "raid_level": "raid5f", 01:31:01.073 "superblock": true, 01:31:01.073 "num_base_bdevs": 4, 01:31:01.073 "num_base_bdevs_discovered": 1, 01:31:01.073 "num_base_bdevs_operational": 4, 01:31:01.073 "base_bdevs_list": [ 01:31:01.073 { 01:31:01.073 "name": "BaseBdev1", 01:31:01.073 "uuid": "061c275e-35ba-49d8-bb7f-f95a9971d8d4", 01:31:01.073 "is_configured": true, 01:31:01.073 "data_offset": 2048, 01:31:01.073 "data_size": 63488 01:31:01.073 }, 01:31:01.073 { 01:31:01.073 "name": "BaseBdev2", 01:31:01.073 "uuid": "00000000-0000-0000-0000-000000000000", 01:31:01.073 "is_configured": false, 01:31:01.073 "data_offset": 0, 01:31:01.073 "data_size": 0 01:31:01.073 }, 01:31:01.073 { 01:31:01.073 "name": "BaseBdev3", 01:31:01.073 "uuid": "00000000-0000-0000-0000-000000000000", 01:31:01.073 "is_configured": false, 01:31:01.073 "data_offset": 0, 01:31:01.073 "data_size": 0 01:31:01.073 }, 01:31:01.073 { 01:31:01.073 "name": "BaseBdev4", 01:31:01.073 "uuid": "00000000-0000-0000-0000-000000000000", 01:31:01.073 "is_configured": false, 01:31:01.073 "data_offset": 0, 01:31:01.073 "data_size": 0 01:31:01.073 } 01:31:01.073 ] 01:31:01.073 }' 01:31:01.073 05:25:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:31:01.073 05:25:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:31:01.639 05:25:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 01:31:01.639 05:25:53 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:01.639 05:25:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:31:01.639 [2024-12-09 05:25:53.005111] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 01:31:01.639 [2024-12-09 05:25:53.005172] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 01:31:01.639 05:25:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:01.639 05:25:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 01:31:01.639 05:25:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:01.639 05:25:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:31:01.639 [2024-12-09 05:25:53.013181] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 01:31:01.639 [2024-12-09 05:25:53.015762] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 01:31:01.639 [2024-12-09 05:25:53.015843] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 01:31:01.639 [2024-12-09 05:25:53.015874] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 01:31:01.639 [2024-12-09 05:25:53.015889] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 01:31:01.639 [2024-12-09 05:25:53.015899] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 01:31:01.639 [2024-12-09 05:25:53.015911] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 01:31:01.639 05:25:53 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:01.639 05:25:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 01:31:01.639 05:25:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 01:31:01.639 05:25:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 01:31:01.639 05:25:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:31:01.639 05:25:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:31:01.639 05:25:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 01:31:01.639 05:25:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:31:01.639 05:25:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 01:31:01.639 05:25:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:31:01.639 05:25:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:31:01.639 05:25:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:31:01.639 05:25:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 01:31:01.639 05:25:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:31:01.639 05:25:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:01.639 05:25:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:31:01.639 05:25:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:31:01.639 05:25:53 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:01.639 05:25:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:31:01.639 "name": "Existed_Raid", 01:31:01.639 "uuid": "ea87bb7e-a62e-49fa-a186-4f503b125ef4", 01:31:01.639 "strip_size_kb": 64, 01:31:01.639 "state": "configuring", 01:31:01.639 "raid_level": "raid5f", 01:31:01.639 "superblock": true, 01:31:01.639 "num_base_bdevs": 4, 01:31:01.639 "num_base_bdevs_discovered": 1, 01:31:01.639 "num_base_bdevs_operational": 4, 01:31:01.639 "base_bdevs_list": [ 01:31:01.639 { 01:31:01.639 "name": "BaseBdev1", 01:31:01.639 "uuid": "061c275e-35ba-49d8-bb7f-f95a9971d8d4", 01:31:01.639 "is_configured": true, 01:31:01.639 "data_offset": 2048, 01:31:01.639 "data_size": 63488 01:31:01.639 }, 01:31:01.639 { 01:31:01.639 "name": "BaseBdev2", 01:31:01.639 "uuid": "00000000-0000-0000-0000-000000000000", 01:31:01.639 "is_configured": false, 01:31:01.639 "data_offset": 0, 01:31:01.639 "data_size": 0 01:31:01.639 }, 01:31:01.639 { 01:31:01.639 "name": "BaseBdev3", 01:31:01.639 "uuid": "00000000-0000-0000-0000-000000000000", 01:31:01.639 "is_configured": false, 01:31:01.639 "data_offset": 0, 01:31:01.639 "data_size": 0 01:31:01.639 }, 01:31:01.639 { 01:31:01.639 "name": "BaseBdev4", 01:31:01.640 "uuid": "00000000-0000-0000-0000-000000000000", 01:31:01.640 "is_configured": false, 01:31:01.640 "data_offset": 0, 01:31:01.640 "data_size": 0 01:31:01.640 } 01:31:01.640 ] 01:31:01.640 }' 01:31:01.640 05:25:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:31:01.640 05:25:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:31:02.204 05:25:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 01:31:02.204 05:25:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
01:31:02.204 05:25:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:31:02.204 [2024-12-09 05:25:53.589559] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 01:31:02.204 BaseBdev2 01:31:02.204 05:25:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:02.204 05:25:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 01:31:02.204 05:25:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 01:31:02.204 05:25:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 01:31:02.204 05:25:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 01:31:02.204 05:25:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 01:31:02.204 05:25:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 01:31:02.204 05:25:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 01:31:02.204 05:25:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:02.204 05:25:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:31:02.204 05:25:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:02.204 05:25:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 01:31:02.204 05:25:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:02.204 05:25:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:31:02.204 [ 01:31:02.204 { 01:31:02.204 "name": "BaseBdev2", 01:31:02.204 "aliases": [ 01:31:02.204 
"c3ae8983-7ff6-46af-bb3b-9540f06796a7" 01:31:02.204 ], 01:31:02.204 "product_name": "Malloc disk", 01:31:02.204 "block_size": 512, 01:31:02.204 "num_blocks": 65536, 01:31:02.204 "uuid": "c3ae8983-7ff6-46af-bb3b-9540f06796a7", 01:31:02.204 "assigned_rate_limits": { 01:31:02.204 "rw_ios_per_sec": 0, 01:31:02.204 "rw_mbytes_per_sec": 0, 01:31:02.204 "r_mbytes_per_sec": 0, 01:31:02.204 "w_mbytes_per_sec": 0 01:31:02.204 }, 01:31:02.204 "claimed": true, 01:31:02.204 "claim_type": "exclusive_write", 01:31:02.204 "zoned": false, 01:31:02.204 "supported_io_types": { 01:31:02.204 "read": true, 01:31:02.204 "write": true, 01:31:02.204 "unmap": true, 01:31:02.204 "flush": true, 01:31:02.204 "reset": true, 01:31:02.204 "nvme_admin": false, 01:31:02.204 "nvme_io": false, 01:31:02.204 "nvme_io_md": false, 01:31:02.204 "write_zeroes": true, 01:31:02.204 "zcopy": true, 01:31:02.204 "get_zone_info": false, 01:31:02.204 "zone_management": false, 01:31:02.204 "zone_append": false, 01:31:02.204 "compare": false, 01:31:02.204 "compare_and_write": false, 01:31:02.204 "abort": true, 01:31:02.204 "seek_hole": false, 01:31:02.204 "seek_data": false, 01:31:02.204 "copy": true, 01:31:02.204 "nvme_iov_md": false 01:31:02.204 }, 01:31:02.204 "memory_domains": [ 01:31:02.204 { 01:31:02.204 "dma_device_id": "system", 01:31:02.204 "dma_device_type": 1 01:31:02.204 }, 01:31:02.204 { 01:31:02.204 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:31:02.204 "dma_device_type": 2 01:31:02.204 } 01:31:02.204 ], 01:31:02.204 "driver_specific": {} 01:31:02.204 } 01:31:02.204 ] 01:31:02.204 05:25:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:02.204 05:25:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 01:31:02.204 05:25:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 01:31:02.204 05:25:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
01:31:02.204 05:25:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 01:31:02.204 05:25:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:31:02.204 05:25:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:31:02.204 05:25:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 01:31:02.204 05:25:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:31:02.204 05:25:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 01:31:02.204 05:25:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:31:02.204 05:25:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:31:02.204 05:25:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:31:02.204 05:25:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 01:31:02.204 05:25:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:31:02.204 05:25:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:31:02.204 05:25:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:02.204 05:25:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:31:02.204 05:25:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:02.204 05:25:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:31:02.204 "name": "Existed_Raid", 01:31:02.204 "uuid": 
"ea87bb7e-a62e-49fa-a186-4f503b125ef4", 01:31:02.204 "strip_size_kb": 64, 01:31:02.204 "state": "configuring", 01:31:02.204 "raid_level": "raid5f", 01:31:02.204 "superblock": true, 01:31:02.204 "num_base_bdevs": 4, 01:31:02.204 "num_base_bdevs_discovered": 2, 01:31:02.204 "num_base_bdevs_operational": 4, 01:31:02.204 "base_bdevs_list": [ 01:31:02.204 { 01:31:02.204 "name": "BaseBdev1", 01:31:02.204 "uuid": "061c275e-35ba-49d8-bb7f-f95a9971d8d4", 01:31:02.204 "is_configured": true, 01:31:02.204 "data_offset": 2048, 01:31:02.204 "data_size": 63488 01:31:02.204 }, 01:31:02.204 { 01:31:02.204 "name": "BaseBdev2", 01:31:02.204 "uuid": "c3ae8983-7ff6-46af-bb3b-9540f06796a7", 01:31:02.204 "is_configured": true, 01:31:02.204 "data_offset": 2048, 01:31:02.204 "data_size": 63488 01:31:02.204 }, 01:31:02.204 { 01:31:02.204 "name": "BaseBdev3", 01:31:02.204 "uuid": "00000000-0000-0000-0000-000000000000", 01:31:02.204 "is_configured": false, 01:31:02.204 "data_offset": 0, 01:31:02.204 "data_size": 0 01:31:02.204 }, 01:31:02.204 { 01:31:02.204 "name": "BaseBdev4", 01:31:02.204 "uuid": "00000000-0000-0000-0000-000000000000", 01:31:02.204 "is_configured": false, 01:31:02.204 "data_offset": 0, 01:31:02.204 "data_size": 0 01:31:02.204 } 01:31:02.204 ] 01:31:02.204 }' 01:31:02.204 05:25:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:31:02.204 05:25:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:31:02.771 05:25:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 01:31:02.771 05:25:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:02.771 05:25:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:31:02.771 [2024-12-09 05:25:54.203816] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 01:31:02.771 BaseBdev3 
01:31:02.771 05:25:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:02.771 05:25:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 01:31:02.771 05:25:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 01:31:02.771 05:25:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 01:31:02.771 05:25:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 01:31:02.771 05:25:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 01:31:02.771 05:25:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 01:31:02.771 05:25:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 01:31:02.771 05:25:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:02.771 05:25:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:31:02.771 05:25:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:02.771 05:25:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 01:31:02.771 05:25:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:02.771 05:25:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:31:02.771 [ 01:31:02.771 { 01:31:02.771 "name": "BaseBdev3", 01:31:02.771 "aliases": [ 01:31:02.771 "08beb660-fa6a-4b94-bd8b-2d0a2911f52c" 01:31:02.771 ], 01:31:02.771 "product_name": "Malloc disk", 01:31:02.771 "block_size": 512, 01:31:02.771 "num_blocks": 65536, 01:31:02.771 "uuid": "08beb660-fa6a-4b94-bd8b-2d0a2911f52c", 01:31:02.771 
"assigned_rate_limits": { 01:31:02.771 "rw_ios_per_sec": 0, 01:31:02.771 "rw_mbytes_per_sec": 0, 01:31:02.771 "r_mbytes_per_sec": 0, 01:31:02.771 "w_mbytes_per_sec": 0 01:31:02.771 }, 01:31:02.771 "claimed": true, 01:31:02.771 "claim_type": "exclusive_write", 01:31:02.771 "zoned": false, 01:31:02.771 "supported_io_types": { 01:31:02.771 "read": true, 01:31:02.771 "write": true, 01:31:02.771 "unmap": true, 01:31:02.771 "flush": true, 01:31:02.771 "reset": true, 01:31:02.771 "nvme_admin": false, 01:31:02.771 "nvme_io": false, 01:31:02.771 "nvme_io_md": false, 01:31:02.771 "write_zeroes": true, 01:31:02.771 "zcopy": true, 01:31:02.771 "get_zone_info": false, 01:31:02.771 "zone_management": false, 01:31:02.771 "zone_append": false, 01:31:02.771 "compare": false, 01:31:02.771 "compare_and_write": false, 01:31:02.771 "abort": true, 01:31:02.771 "seek_hole": false, 01:31:02.771 "seek_data": false, 01:31:02.771 "copy": true, 01:31:02.771 "nvme_iov_md": false 01:31:02.771 }, 01:31:02.771 "memory_domains": [ 01:31:02.771 { 01:31:02.771 "dma_device_id": "system", 01:31:02.771 "dma_device_type": 1 01:31:02.771 }, 01:31:02.771 { 01:31:02.771 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:31:02.771 "dma_device_type": 2 01:31:02.771 } 01:31:02.771 ], 01:31:02.771 "driver_specific": {} 01:31:02.771 } 01:31:02.771 ] 01:31:02.771 05:25:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:02.771 05:25:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 01:31:02.771 05:25:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 01:31:02.771 05:25:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 01:31:02.771 05:25:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 01:31:02.771 05:25:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 01:31:02.771 05:25:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:31:02.771 05:25:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 01:31:02.771 05:25:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:31:02.771 05:25:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 01:31:02.771 05:25:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:31:02.771 05:25:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:31:02.771 05:25:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:31:02.771 05:25:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 01:31:02.771 05:25:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:31:02.771 05:25:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:31:02.771 05:25:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:02.771 05:25:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:31:02.771 05:25:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:02.771 05:25:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:31:02.771 "name": "Existed_Raid", 01:31:02.771 "uuid": "ea87bb7e-a62e-49fa-a186-4f503b125ef4", 01:31:02.771 "strip_size_kb": 64, 01:31:02.771 "state": "configuring", 01:31:02.771 "raid_level": "raid5f", 01:31:02.771 "superblock": true, 01:31:02.771 "num_base_bdevs": 4, 01:31:02.771 "num_base_bdevs_discovered": 3, 
01:31:02.771 "num_base_bdevs_operational": 4, 01:31:02.771 "base_bdevs_list": [ 01:31:02.771 { 01:31:02.771 "name": "BaseBdev1", 01:31:02.771 "uuid": "061c275e-35ba-49d8-bb7f-f95a9971d8d4", 01:31:02.771 "is_configured": true, 01:31:02.771 "data_offset": 2048, 01:31:02.771 "data_size": 63488 01:31:02.771 }, 01:31:02.771 { 01:31:02.771 "name": "BaseBdev2", 01:31:02.771 "uuid": "c3ae8983-7ff6-46af-bb3b-9540f06796a7", 01:31:02.771 "is_configured": true, 01:31:02.771 "data_offset": 2048, 01:31:02.771 "data_size": 63488 01:31:02.771 }, 01:31:02.771 { 01:31:02.771 "name": "BaseBdev3", 01:31:02.771 "uuid": "08beb660-fa6a-4b94-bd8b-2d0a2911f52c", 01:31:02.771 "is_configured": true, 01:31:02.771 "data_offset": 2048, 01:31:02.771 "data_size": 63488 01:31:02.771 }, 01:31:02.771 { 01:31:02.771 "name": "BaseBdev4", 01:31:02.771 "uuid": "00000000-0000-0000-0000-000000000000", 01:31:02.771 "is_configured": false, 01:31:02.771 "data_offset": 0, 01:31:02.771 "data_size": 0 01:31:02.771 } 01:31:02.771 ] 01:31:02.771 }' 01:31:02.771 05:25:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:31:02.771 05:25:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:31:03.337 05:25:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 01:31:03.338 05:25:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:03.338 05:25:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:31:03.338 [2024-12-09 05:25:54.814735] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 01:31:03.338 [2024-12-09 05:25:54.815019] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 01:31:03.338 [2024-12-09 05:25:54.815036] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 01:31:03.338 BaseBdev4 
01:31:03.338 [2024-12-09 05:25:54.815330] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 01:31:03.338 05:25:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:03.338 05:25:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 01:31:03.338 05:25:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 01:31:03.338 05:25:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 01:31:03.338 05:25:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 01:31:03.338 05:25:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 01:31:03.338 05:25:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 01:31:03.338 05:25:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 01:31:03.338 05:25:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:03.338 05:25:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:31:03.338 [2024-12-09 05:25:54.821474] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 01:31:03.338 [2024-12-09 05:25:54.821669] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 01:31:03.338 [2024-12-09 05:25:54.822012] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:31:03.338 05:25:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:03.338 05:25:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 01:31:03.338 05:25:54 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:03.338 05:25:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:31:03.338 [ 01:31:03.338 { 01:31:03.338 "name": "BaseBdev4", 01:31:03.338 "aliases": [ 01:31:03.338 "fee43f38-8a03-48d9-aab8-fd9f84cf0dd5" 01:31:03.338 ], 01:31:03.338 "product_name": "Malloc disk", 01:31:03.338 "block_size": 512, 01:31:03.338 "num_blocks": 65536, 01:31:03.338 "uuid": "fee43f38-8a03-48d9-aab8-fd9f84cf0dd5", 01:31:03.338 "assigned_rate_limits": { 01:31:03.338 "rw_ios_per_sec": 0, 01:31:03.338 "rw_mbytes_per_sec": 0, 01:31:03.338 "r_mbytes_per_sec": 0, 01:31:03.338 "w_mbytes_per_sec": 0 01:31:03.338 }, 01:31:03.338 "claimed": true, 01:31:03.338 "claim_type": "exclusive_write", 01:31:03.338 "zoned": false, 01:31:03.338 "supported_io_types": { 01:31:03.338 "read": true, 01:31:03.338 "write": true, 01:31:03.338 "unmap": true, 01:31:03.338 "flush": true, 01:31:03.338 "reset": true, 01:31:03.338 "nvme_admin": false, 01:31:03.338 "nvme_io": false, 01:31:03.338 "nvme_io_md": false, 01:31:03.338 "write_zeroes": true, 01:31:03.338 "zcopy": true, 01:31:03.338 "get_zone_info": false, 01:31:03.338 "zone_management": false, 01:31:03.338 "zone_append": false, 01:31:03.338 "compare": false, 01:31:03.338 "compare_and_write": false, 01:31:03.338 "abort": true, 01:31:03.338 "seek_hole": false, 01:31:03.338 "seek_data": false, 01:31:03.338 "copy": true, 01:31:03.338 "nvme_iov_md": false 01:31:03.338 }, 01:31:03.338 "memory_domains": [ 01:31:03.338 { 01:31:03.338 "dma_device_id": "system", 01:31:03.338 "dma_device_type": 1 01:31:03.338 }, 01:31:03.338 { 01:31:03.338 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:31:03.338 "dma_device_type": 2 01:31:03.338 } 01:31:03.338 ], 01:31:03.338 "driver_specific": {} 01:31:03.338 } 01:31:03.338 ] 01:31:03.338 05:25:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:03.338 05:25:54 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 01:31:03.338 05:25:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 01:31:03.338 05:25:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 01:31:03.338 05:25:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 01:31:03.338 05:25:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:31:03.338 05:25:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:31:03.338 05:25:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 01:31:03.338 05:25:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:31:03.338 05:25:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 01:31:03.338 05:25:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:31:03.338 05:25:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:31:03.338 05:25:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:31:03.338 05:25:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 01:31:03.338 05:25:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:31:03.338 05:25:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:31:03.338 05:25:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:03.338 05:25:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
01:31:03.338 05:25:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:03.338 05:25:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:31:03.338 "name": "Existed_Raid", 01:31:03.338 "uuid": "ea87bb7e-a62e-49fa-a186-4f503b125ef4", 01:31:03.338 "strip_size_kb": 64, 01:31:03.338 "state": "online", 01:31:03.338 "raid_level": "raid5f", 01:31:03.338 "superblock": true, 01:31:03.338 "num_base_bdevs": 4, 01:31:03.338 "num_base_bdevs_discovered": 4, 01:31:03.338 "num_base_bdevs_operational": 4, 01:31:03.338 "base_bdevs_list": [ 01:31:03.338 { 01:31:03.338 "name": "BaseBdev1", 01:31:03.338 "uuid": "061c275e-35ba-49d8-bb7f-f95a9971d8d4", 01:31:03.338 "is_configured": true, 01:31:03.338 "data_offset": 2048, 01:31:03.338 "data_size": 63488 01:31:03.338 }, 01:31:03.338 { 01:31:03.338 "name": "BaseBdev2", 01:31:03.338 "uuid": "c3ae8983-7ff6-46af-bb3b-9540f06796a7", 01:31:03.338 "is_configured": true, 01:31:03.338 "data_offset": 2048, 01:31:03.338 "data_size": 63488 01:31:03.338 }, 01:31:03.338 { 01:31:03.338 "name": "BaseBdev3", 01:31:03.338 "uuid": "08beb660-fa6a-4b94-bd8b-2d0a2911f52c", 01:31:03.338 "is_configured": true, 01:31:03.338 "data_offset": 2048, 01:31:03.338 "data_size": 63488 01:31:03.338 }, 01:31:03.338 { 01:31:03.338 "name": "BaseBdev4", 01:31:03.338 "uuid": "fee43f38-8a03-48d9-aab8-fd9f84cf0dd5", 01:31:03.338 "is_configured": true, 01:31:03.338 "data_offset": 2048, 01:31:03.338 "data_size": 63488 01:31:03.338 } 01:31:03.338 ] 01:31:03.338 }' 01:31:03.338 05:25:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:31:03.338 05:25:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:31:03.906 05:25:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 01:31:03.906 05:25:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=Existed_Raid 01:31:03.906 05:25:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 01:31:03.906 05:25:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 01:31:03.906 05:25:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 01:31:03.906 05:25:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 01:31:03.906 05:25:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 01:31:03.906 05:25:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 01:31:03.906 05:25:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:03.906 05:25:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:31:03.906 [2024-12-09 05:25:55.372910] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 01:31:03.906 05:25:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:03.906 05:25:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 01:31:03.906 "name": "Existed_Raid", 01:31:03.906 "aliases": [ 01:31:03.906 "ea87bb7e-a62e-49fa-a186-4f503b125ef4" 01:31:03.906 ], 01:31:03.906 "product_name": "Raid Volume", 01:31:03.906 "block_size": 512, 01:31:03.906 "num_blocks": 190464, 01:31:03.906 "uuid": "ea87bb7e-a62e-49fa-a186-4f503b125ef4", 01:31:03.906 "assigned_rate_limits": { 01:31:03.906 "rw_ios_per_sec": 0, 01:31:03.906 "rw_mbytes_per_sec": 0, 01:31:03.906 "r_mbytes_per_sec": 0, 01:31:03.906 "w_mbytes_per_sec": 0 01:31:03.906 }, 01:31:03.906 "claimed": false, 01:31:03.906 "zoned": false, 01:31:03.906 "supported_io_types": { 01:31:03.906 "read": true, 01:31:03.906 "write": true, 01:31:03.906 "unmap": false, 01:31:03.906 "flush": false, 
01:31:03.906 "reset": true, 01:31:03.906 "nvme_admin": false, 01:31:03.906 "nvme_io": false, 01:31:03.906 "nvme_io_md": false, 01:31:03.906 "write_zeroes": true, 01:31:03.906 "zcopy": false, 01:31:03.906 "get_zone_info": false, 01:31:03.906 "zone_management": false, 01:31:03.906 "zone_append": false, 01:31:03.906 "compare": false, 01:31:03.906 "compare_and_write": false, 01:31:03.906 "abort": false, 01:31:03.906 "seek_hole": false, 01:31:03.906 "seek_data": false, 01:31:03.906 "copy": false, 01:31:03.906 "nvme_iov_md": false 01:31:03.906 }, 01:31:03.906 "driver_specific": { 01:31:03.906 "raid": { 01:31:03.906 "uuid": "ea87bb7e-a62e-49fa-a186-4f503b125ef4", 01:31:03.906 "strip_size_kb": 64, 01:31:03.906 "state": "online", 01:31:03.906 "raid_level": "raid5f", 01:31:03.906 "superblock": true, 01:31:03.906 "num_base_bdevs": 4, 01:31:03.906 "num_base_bdevs_discovered": 4, 01:31:03.906 "num_base_bdevs_operational": 4, 01:31:03.906 "base_bdevs_list": [ 01:31:03.906 { 01:31:03.906 "name": "BaseBdev1", 01:31:03.906 "uuid": "061c275e-35ba-49d8-bb7f-f95a9971d8d4", 01:31:03.906 "is_configured": true, 01:31:03.906 "data_offset": 2048, 01:31:03.906 "data_size": 63488 01:31:03.906 }, 01:31:03.906 { 01:31:03.906 "name": "BaseBdev2", 01:31:03.906 "uuid": "c3ae8983-7ff6-46af-bb3b-9540f06796a7", 01:31:03.906 "is_configured": true, 01:31:03.906 "data_offset": 2048, 01:31:03.906 "data_size": 63488 01:31:03.906 }, 01:31:03.906 { 01:31:03.906 "name": "BaseBdev3", 01:31:03.906 "uuid": "08beb660-fa6a-4b94-bd8b-2d0a2911f52c", 01:31:03.906 "is_configured": true, 01:31:03.906 "data_offset": 2048, 01:31:03.906 "data_size": 63488 01:31:03.906 }, 01:31:03.906 { 01:31:03.906 "name": "BaseBdev4", 01:31:03.906 "uuid": "fee43f38-8a03-48d9-aab8-fd9f84cf0dd5", 01:31:03.906 "is_configured": true, 01:31:03.906 "data_offset": 2048, 01:31:03.906 "data_size": 63488 01:31:03.906 } 01:31:03.906 ] 01:31:03.906 } 01:31:03.906 } 01:31:03.906 }' 01:31:03.906 05:25:55 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 01:31:03.906 05:25:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 01:31:03.906 BaseBdev2 01:31:03.906 BaseBdev3 01:31:03.906 BaseBdev4' 01:31:03.906 05:25:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:31:03.906 05:25:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 01:31:03.906 05:25:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:31:03.906 05:25:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:31:04.164 05:25:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 01:31:04.164 05:25:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:04.165 05:25:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:31:04.165 05:25:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:04.165 05:25:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:31:04.165 05:25:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:31:04.165 05:25:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:31:04.165 05:25:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 01:31:04.165 05:25:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:31:04.165 05:25:55 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:04.165 05:25:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:31:04.165 05:25:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:04.165 05:25:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:31:04.165 05:25:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:31:04.165 05:25:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:31:04.165 05:25:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 01:31:04.165 05:25:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:04.165 05:25:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:31:04.165 05:25:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:31:04.165 05:25:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:04.165 05:25:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:31:04.165 05:25:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:31:04.165 05:25:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:31:04.165 05:25:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 01:31:04.165 05:25:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:31:04.165 05:25:55 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:04.165 05:25:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:31:04.165 05:25:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:04.165 05:25:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:31:04.165 05:25:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:31:04.165 05:25:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 01:31:04.165 05:25:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:04.165 05:25:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:31:04.165 [2024-12-09 05:25:55.732791] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 01:31:04.423 05:25:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:04.423 05:25:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 01:31:04.423 05:25:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 01:31:04.423 05:25:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 01:31:04.423 05:25:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 01:31:04.423 05:25:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 01:31:04.423 05:25:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 01:31:04.423 05:25:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:31:04.423 05:25:55 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 01:31:04.423 05:25:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 01:31:04.423 05:25:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:31:04.423 05:25:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 01:31:04.423 05:25:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:31:04.423 05:25:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:31:04.423 05:25:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:31:04.423 05:25:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 01:31:04.423 05:25:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:31:04.423 05:25:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:04.423 05:25:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:31:04.423 05:25:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:31:04.423 05:25:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:04.423 05:25:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:31:04.423 "name": "Existed_Raid", 01:31:04.423 "uuid": "ea87bb7e-a62e-49fa-a186-4f503b125ef4", 01:31:04.423 "strip_size_kb": 64, 01:31:04.423 "state": "online", 01:31:04.423 "raid_level": "raid5f", 01:31:04.423 "superblock": true, 01:31:04.423 "num_base_bdevs": 4, 01:31:04.423 "num_base_bdevs_discovered": 3, 01:31:04.423 "num_base_bdevs_operational": 3, 01:31:04.423 "base_bdevs_list": [ 01:31:04.423 { 01:31:04.423 "name": 
null, 01:31:04.423 "uuid": "00000000-0000-0000-0000-000000000000", 01:31:04.423 "is_configured": false, 01:31:04.423 "data_offset": 0, 01:31:04.423 "data_size": 63488 01:31:04.423 }, 01:31:04.423 { 01:31:04.423 "name": "BaseBdev2", 01:31:04.423 "uuid": "c3ae8983-7ff6-46af-bb3b-9540f06796a7", 01:31:04.423 "is_configured": true, 01:31:04.423 "data_offset": 2048, 01:31:04.423 "data_size": 63488 01:31:04.423 }, 01:31:04.423 { 01:31:04.423 "name": "BaseBdev3", 01:31:04.423 "uuid": "08beb660-fa6a-4b94-bd8b-2d0a2911f52c", 01:31:04.423 "is_configured": true, 01:31:04.423 "data_offset": 2048, 01:31:04.423 "data_size": 63488 01:31:04.423 }, 01:31:04.423 { 01:31:04.423 "name": "BaseBdev4", 01:31:04.423 "uuid": "fee43f38-8a03-48d9-aab8-fd9f84cf0dd5", 01:31:04.423 "is_configured": true, 01:31:04.423 "data_offset": 2048, 01:31:04.423 "data_size": 63488 01:31:04.423 } 01:31:04.423 ] 01:31:04.423 }' 01:31:04.423 05:25:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:31:04.423 05:25:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:31:04.987 05:25:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 01:31:04.987 05:25:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 01:31:04.987 05:25:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 01:31:04.987 05:25:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 01:31:04.987 05:25:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:04.987 05:25:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:31:04.987 05:25:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:04.987 05:25:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # 
raid_bdev=Existed_Raid 01:31:04.987 05:25:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 01:31:04.987 05:25:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 01:31:04.987 05:25:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:04.987 05:25:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:31:04.987 [2024-12-09 05:25:56.376132] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 01:31:04.987 [2024-12-09 05:25:56.376346] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 01:31:04.987 [2024-12-09 05:25:56.449567] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 01:31:04.987 05:25:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:04.987 05:25:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 01:31:04.987 05:25:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 01:31:04.987 05:25:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 01:31:04.987 05:25:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:04.987 05:25:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:31:04.987 05:25:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 01:31:04.987 05:25:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:04.987 05:25:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 01:31:04.987 05:25:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 01:31:04.987 05:25:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 01:31:04.987 05:25:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:04.987 05:25:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:31:04.987 [2024-12-09 05:25:56.513622] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 01:31:04.987 05:25:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:04.987 05:25:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 01:31:04.987 05:25:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 01:31:04.987 05:25:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 01:31:04.987 05:25:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:04.987 05:25:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:31:04.987 05:25:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 01:31:05.245 05:25:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:05.245 05:25:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 01:31:05.245 05:25:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 01:31:05.245 05:25:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 01:31:05.245 05:25:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:05.245 05:25:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:31:05.245 [2024-12-09 
05:25:56.646646] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 01:31:05.245 [2024-12-09 05:25:56.646895] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 01:31:05.245 05:25:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:05.245 05:25:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 01:31:05.245 05:25:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 01:31:05.245 05:25:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 01:31:05.245 05:25:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:05.245 05:25:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:31:05.245 05:25:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 01:31:05.245 05:25:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:05.245 05:25:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 01:31:05.245 05:25:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 01:31:05.245 05:25:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 01:31:05.245 05:25:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 01:31:05.245 05:25:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 01:31:05.245 05:25:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 01:31:05.245 05:25:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:05.245 05:25:56 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:31:05.245 BaseBdev2 01:31:05.245 05:25:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:05.245 05:25:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 01:31:05.245 05:25:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 01:31:05.245 05:25:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 01:31:05.245 05:25:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 01:31:05.245 05:25:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 01:31:05.245 05:25:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 01:31:05.245 05:25:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 01:31:05.245 05:25:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:05.245 05:25:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:31:05.245 05:25:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:05.245 05:25:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 01:31:05.245 05:25:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:05.245 05:25:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:31:05.245 [ 01:31:05.245 { 01:31:05.245 "name": "BaseBdev2", 01:31:05.245 "aliases": [ 01:31:05.245 "bee11843-45ea-4f95-bcb0-ebd9630b5545" 01:31:05.245 ], 01:31:05.245 "product_name": "Malloc disk", 01:31:05.245 "block_size": 512, 01:31:05.245 
"num_blocks": 65536, 01:31:05.245 "uuid": "bee11843-45ea-4f95-bcb0-ebd9630b5545", 01:31:05.245 "assigned_rate_limits": { 01:31:05.245 "rw_ios_per_sec": 0, 01:31:05.245 "rw_mbytes_per_sec": 0, 01:31:05.245 "r_mbytes_per_sec": 0, 01:31:05.245 "w_mbytes_per_sec": 0 01:31:05.245 }, 01:31:05.245 "claimed": false, 01:31:05.245 "zoned": false, 01:31:05.245 "supported_io_types": { 01:31:05.245 "read": true, 01:31:05.245 "write": true, 01:31:05.245 "unmap": true, 01:31:05.245 "flush": true, 01:31:05.245 "reset": true, 01:31:05.245 "nvme_admin": false, 01:31:05.245 "nvme_io": false, 01:31:05.245 "nvme_io_md": false, 01:31:05.245 "write_zeroes": true, 01:31:05.245 "zcopy": true, 01:31:05.245 "get_zone_info": false, 01:31:05.245 "zone_management": false, 01:31:05.245 "zone_append": false, 01:31:05.245 "compare": false, 01:31:05.245 "compare_and_write": false, 01:31:05.245 "abort": true, 01:31:05.245 "seek_hole": false, 01:31:05.245 "seek_data": false, 01:31:05.245 "copy": true, 01:31:05.245 "nvme_iov_md": false 01:31:05.245 }, 01:31:05.245 "memory_domains": [ 01:31:05.245 { 01:31:05.245 "dma_device_id": "system", 01:31:05.245 "dma_device_type": 1 01:31:05.245 }, 01:31:05.245 { 01:31:05.245 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:31:05.245 "dma_device_type": 2 01:31:05.245 } 01:31:05.245 ], 01:31:05.245 "driver_specific": {} 01:31:05.245 } 01:31:05.245 ] 01:31:05.245 05:25:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:05.245 05:25:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 01:31:05.245 05:25:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 01:31:05.245 05:25:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 01:31:05.245 05:25:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 01:31:05.245 05:25:56 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:05.245 05:25:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:31:05.504 BaseBdev3 01:31:05.504 05:25:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:05.504 05:25:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 01:31:05.504 05:25:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 01:31:05.504 05:25:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 01:31:05.504 05:25:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 01:31:05.504 05:25:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 01:31:05.504 05:25:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 01:31:05.504 05:25:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 01:31:05.504 05:25:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:05.504 05:25:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:31:05.504 05:25:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:05.504 05:25:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 01:31:05.504 05:25:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:05.504 05:25:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:31:05.504 [ 01:31:05.504 { 01:31:05.504 "name": "BaseBdev3", 01:31:05.504 "aliases": [ 01:31:05.504 
"91c8282d-ddce-4688-95dd-171821ae4097" 01:31:05.504 ], 01:31:05.504 "product_name": "Malloc disk", 01:31:05.504 "block_size": 512, 01:31:05.504 "num_blocks": 65536, 01:31:05.504 "uuid": "91c8282d-ddce-4688-95dd-171821ae4097", 01:31:05.504 "assigned_rate_limits": { 01:31:05.504 "rw_ios_per_sec": 0, 01:31:05.504 "rw_mbytes_per_sec": 0, 01:31:05.504 "r_mbytes_per_sec": 0, 01:31:05.504 "w_mbytes_per_sec": 0 01:31:05.504 }, 01:31:05.504 "claimed": false, 01:31:05.504 "zoned": false, 01:31:05.504 "supported_io_types": { 01:31:05.504 "read": true, 01:31:05.504 "write": true, 01:31:05.504 "unmap": true, 01:31:05.504 "flush": true, 01:31:05.504 "reset": true, 01:31:05.504 "nvme_admin": false, 01:31:05.504 "nvme_io": false, 01:31:05.504 "nvme_io_md": false, 01:31:05.504 "write_zeroes": true, 01:31:05.504 "zcopy": true, 01:31:05.504 "get_zone_info": false, 01:31:05.504 "zone_management": false, 01:31:05.504 "zone_append": false, 01:31:05.504 "compare": false, 01:31:05.504 "compare_and_write": false, 01:31:05.504 "abort": true, 01:31:05.504 "seek_hole": false, 01:31:05.504 "seek_data": false, 01:31:05.504 "copy": true, 01:31:05.504 "nvme_iov_md": false 01:31:05.504 }, 01:31:05.504 "memory_domains": [ 01:31:05.504 { 01:31:05.504 "dma_device_id": "system", 01:31:05.504 "dma_device_type": 1 01:31:05.504 }, 01:31:05.504 { 01:31:05.504 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:31:05.504 "dma_device_type": 2 01:31:05.504 } 01:31:05.504 ], 01:31:05.504 "driver_specific": {} 01:31:05.504 } 01:31:05.504 ] 01:31:05.504 05:25:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:05.504 05:25:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 01:31:05.504 05:25:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 01:31:05.504 05:25:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 01:31:05.504 05:25:56 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 01:31:05.504 05:25:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:05.504 05:25:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:31:05.504 BaseBdev4 01:31:05.504 05:25:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:05.504 05:25:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 01:31:05.504 05:25:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 01:31:05.504 05:25:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 01:31:05.504 05:25:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 01:31:05.504 05:25:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 01:31:05.504 05:25:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 01:31:05.504 05:25:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 01:31:05.504 05:25:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:05.504 05:25:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:31:05.504 05:25:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:05.504 05:25:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 01:31:05.504 05:25:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:05.504 05:25:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 01:31:05.504 [ 01:31:05.504 { 01:31:05.504 "name": "BaseBdev4", 01:31:05.504 "aliases": [ 01:31:05.504 "ee9289ee-7683-4edb-820d-257bc8ee0880" 01:31:05.504 ], 01:31:05.504 "product_name": "Malloc disk", 01:31:05.504 "block_size": 512, 01:31:05.504 "num_blocks": 65536, 01:31:05.504 "uuid": "ee9289ee-7683-4edb-820d-257bc8ee0880", 01:31:05.504 "assigned_rate_limits": { 01:31:05.504 "rw_ios_per_sec": 0, 01:31:05.504 "rw_mbytes_per_sec": 0, 01:31:05.504 "r_mbytes_per_sec": 0, 01:31:05.504 "w_mbytes_per_sec": 0 01:31:05.504 }, 01:31:05.504 "claimed": false, 01:31:05.504 "zoned": false, 01:31:05.504 "supported_io_types": { 01:31:05.504 "read": true, 01:31:05.504 "write": true, 01:31:05.504 "unmap": true, 01:31:05.504 "flush": true, 01:31:05.504 "reset": true, 01:31:05.504 "nvme_admin": false, 01:31:05.504 "nvme_io": false, 01:31:05.504 "nvme_io_md": false, 01:31:05.504 "write_zeroes": true, 01:31:05.504 "zcopy": true, 01:31:05.504 "get_zone_info": false, 01:31:05.504 "zone_management": false, 01:31:05.504 "zone_append": false, 01:31:05.504 "compare": false, 01:31:05.504 "compare_and_write": false, 01:31:05.504 "abort": true, 01:31:05.504 "seek_hole": false, 01:31:05.504 "seek_data": false, 01:31:05.504 "copy": true, 01:31:05.504 "nvme_iov_md": false 01:31:05.504 }, 01:31:05.504 "memory_domains": [ 01:31:05.504 { 01:31:05.504 "dma_device_id": "system", 01:31:05.504 "dma_device_type": 1 01:31:05.504 }, 01:31:05.504 { 01:31:05.504 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:31:05.504 "dma_device_type": 2 01:31:05.504 } 01:31:05.504 ], 01:31:05.504 "driver_specific": {} 01:31:05.504 } 01:31:05.504 ] 01:31:05.504 05:25:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:05.504 05:25:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 01:31:05.504 05:25:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 01:31:05.505 05:25:56 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 01:31:05.505 05:25:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 01:31:05.505 05:25:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:05.505 05:25:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:31:05.505 [2024-12-09 05:25:56.966268] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 01:31:05.505 [2024-12-09 05:25:56.966323] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 01:31:05.505 [2024-12-09 05:25:56.966403] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 01:31:05.505 [2024-12-09 05:25:56.968662] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 01:31:05.505 [2024-12-09 05:25:56.968748] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 01:31:05.505 05:25:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:05.505 05:25:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 01:31:05.505 05:25:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:31:05.505 05:25:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:31:05.505 05:25:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 01:31:05.505 05:25:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:31:05.505 05:25:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 01:31:05.505 05:25:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:31:05.505 05:25:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:31:05.505 05:25:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:31:05.505 05:25:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 01:31:05.505 05:25:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:31:05.505 05:25:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:05.505 05:25:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:31:05.505 05:25:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:31:05.505 05:25:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:05.505 05:25:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:31:05.505 "name": "Existed_Raid", 01:31:05.505 "uuid": "c1e5a767-107d-4dc6-a071-181a294a81d4", 01:31:05.505 "strip_size_kb": 64, 01:31:05.505 "state": "configuring", 01:31:05.505 "raid_level": "raid5f", 01:31:05.505 "superblock": true, 01:31:05.505 "num_base_bdevs": 4, 01:31:05.505 "num_base_bdevs_discovered": 3, 01:31:05.505 "num_base_bdevs_operational": 4, 01:31:05.505 "base_bdevs_list": [ 01:31:05.505 { 01:31:05.505 "name": "BaseBdev1", 01:31:05.505 "uuid": "00000000-0000-0000-0000-000000000000", 01:31:05.505 "is_configured": false, 01:31:05.505 "data_offset": 0, 01:31:05.505 "data_size": 0 01:31:05.505 }, 01:31:05.505 { 01:31:05.505 "name": "BaseBdev2", 01:31:05.505 "uuid": "bee11843-45ea-4f95-bcb0-ebd9630b5545", 01:31:05.505 "is_configured": true, 01:31:05.505 "data_offset": 2048, 01:31:05.505 
"data_size": 63488 01:31:05.505 }, 01:31:05.505 { 01:31:05.505 "name": "BaseBdev3", 01:31:05.505 "uuid": "91c8282d-ddce-4688-95dd-171821ae4097", 01:31:05.505 "is_configured": true, 01:31:05.505 "data_offset": 2048, 01:31:05.505 "data_size": 63488 01:31:05.505 }, 01:31:05.505 { 01:31:05.505 "name": "BaseBdev4", 01:31:05.505 "uuid": "ee9289ee-7683-4edb-820d-257bc8ee0880", 01:31:05.505 "is_configured": true, 01:31:05.505 "data_offset": 2048, 01:31:05.505 "data_size": 63488 01:31:05.505 } 01:31:05.505 ] 01:31:05.505 }' 01:31:05.505 05:25:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:31:05.505 05:25:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:31:06.072 05:25:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 01:31:06.072 05:25:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:06.072 05:25:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:31:06.072 [2024-12-09 05:25:57.506425] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 01:31:06.072 05:25:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:06.072 05:25:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 01:31:06.072 05:25:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:31:06.072 05:25:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:31:06.072 05:25:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 01:31:06.072 05:25:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:31:06.072 05:25:57 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 01:31:06.072 05:25:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:31:06.072 05:25:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:31:06.072 05:25:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:31:06.072 05:25:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 01:31:06.072 05:25:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:31:06.072 05:25:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:31:06.072 05:25:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:06.072 05:25:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:31:06.072 05:25:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:06.072 05:25:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:31:06.072 "name": "Existed_Raid", 01:31:06.072 "uuid": "c1e5a767-107d-4dc6-a071-181a294a81d4", 01:31:06.072 "strip_size_kb": 64, 01:31:06.072 "state": "configuring", 01:31:06.072 "raid_level": "raid5f", 01:31:06.072 "superblock": true, 01:31:06.072 "num_base_bdevs": 4, 01:31:06.072 "num_base_bdevs_discovered": 2, 01:31:06.072 "num_base_bdevs_operational": 4, 01:31:06.072 "base_bdevs_list": [ 01:31:06.072 { 01:31:06.072 "name": "BaseBdev1", 01:31:06.072 "uuid": "00000000-0000-0000-0000-000000000000", 01:31:06.072 "is_configured": false, 01:31:06.072 "data_offset": 0, 01:31:06.072 "data_size": 0 01:31:06.072 }, 01:31:06.072 { 01:31:06.072 "name": null, 01:31:06.072 "uuid": "bee11843-45ea-4f95-bcb0-ebd9630b5545", 01:31:06.072 
"is_configured": false, 01:31:06.072 "data_offset": 0, 01:31:06.072 "data_size": 63488 01:31:06.072 }, 01:31:06.072 { 01:31:06.072 "name": "BaseBdev3", 01:31:06.072 "uuid": "91c8282d-ddce-4688-95dd-171821ae4097", 01:31:06.072 "is_configured": true, 01:31:06.072 "data_offset": 2048, 01:31:06.072 "data_size": 63488 01:31:06.072 }, 01:31:06.072 { 01:31:06.072 "name": "BaseBdev4", 01:31:06.072 "uuid": "ee9289ee-7683-4edb-820d-257bc8ee0880", 01:31:06.072 "is_configured": true, 01:31:06.072 "data_offset": 2048, 01:31:06.072 "data_size": 63488 01:31:06.072 } 01:31:06.072 ] 01:31:06.072 }' 01:31:06.072 05:25:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:31:06.072 05:25:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:31:06.640 05:25:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 01:31:06.640 05:25:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 01:31:06.640 05:25:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:06.641 05:25:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:31:06.641 05:25:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:06.641 05:25:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 01:31:06.641 05:25:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 01:31:06.641 05:25:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:06.641 05:25:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:31:06.641 [2024-12-09 05:25:58.118317] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
01:31:06.641 BaseBdev1 01:31:06.641 05:25:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:06.641 05:25:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 01:31:06.641 05:25:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 01:31:06.641 05:25:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 01:31:06.641 05:25:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 01:31:06.641 05:25:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 01:31:06.641 05:25:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 01:31:06.641 05:25:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 01:31:06.641 05:25:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:06.641 05:25:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:31:06.641 05:25:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:06.641 05:25:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 01:31:06.641 05:25:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:06.641 05:25:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:31:06.641 [ 01:31:06.641 { 01:31:06.641 "name": "BaseBdev1", 01:31:06.641 "aliases": [ 01:31:06.641 "3a1c842a-64b3-4576-a77f-c0598c72a99f" 01:31:06.641 ], 01:31:06.641 "product_name": "Malloc disk", 01:31:06.641 "block_size": 512, 01:31:06.641 "num_blocks": 65536, 01:31:06.641 "uuid": "3a1c842a-64b3-4576-a77f-c0598c72a99f", 
01:31:06.641 "assigned_rate_limits": { 01:31:06.641 "rw_ios_per_sec": 0, 01:31:06.641 "rw_mbytes_per_sec": 0, 01:31:06.641 "r_mbytes_per_sec": 0, 01:31:06.641 "w_mbytes_per_sec": 0 01:31:06.641 }, 01:31:06.641 "claimed": true, 01:31:06.641 "claim_type": "exclusive_write", 01:31:06.641 "zoned": false, 01:31:06.641 "supported_io_types": { 01:31:06.641 "read": true, 01:31:06.641 "write": true, 01:31:06.641 "unmap": true, 01:31:06.641 "flush": true, 01:31:06.641 "reset": true, 01:31:06.641 "nvme_admin": false, 01:31:06.641 "nvme_io": false, 01:31:06.641 "nvme_io_md": false, 01:31:06.641 "write_zeroes": true, 01:31:06.641 "zcopy": true, 01:31:06.641 "get_zone_info": false, 01:31:06.641 "zone_management": false, 01:31:06.641 "zone_append": false, 01:31:06.641 "compare": false, 01:31:06.641 "compare_and_write": false, 01:31:06.641 "abort": true, 01:31:06.641 "seek_hole": false, 01:31:06.641 "seek_data": false, 01:31:06.641 "copy": true, 01:31:06.641 "nvme_iov_md": false 01:31:06.641 }, 01:31:06.641 "memory_domains": [ 01:31:06.641 { 01:31:06.641 "dma_device_id": "system", 01:31:06.641 "dma_device_type": 1 01:31:06.641 }, 01:31:06.641 { 01:31:06.641 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:31:06.641 "dma_device_type": 2 01:31:06.641 } 01:31:06.641 ], 01:31:06.641 "driver_specific": {} 01:31:06.641 } 01:31:06.641 ] 01:31:06.641 05:25:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:06.641 05:25:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 01:31:06.641 05:25:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 01:31:06.641 05:25:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:31:06.641 05:25:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:31:06.641 05:25:58 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 01:31:06.641 05:25:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:31:06.641 05:25:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 01:31:06.641 05:25:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:31:06.641 05:25:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:31:06.641 05:25:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:31:06.641 05:25:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 01:31:06.641 05:25:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:31:06.641 05:25:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:31:06.641 05:25:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:06.641 05:25:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:31:06.641 05:25:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:06.641 05:25:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:31:06.641 "name": "Existed_Raid", 01:31:06.641 "uuid": "c1e5a767-107d-4dc6-a071-181a294a81d4", 01:31:06.641 "strip_size_kb": 64, 01:31:06.641 "state": "configuring", 01:31:06.641 "raid_level": "raid5f", 01:31:06.641 "superblock": true, 01:31:06.641 "num_base_bdevs": 4, 01:31:06.641 "num_base_bdevs_discovered": 3, 01:31:06.641 "num_base_bdevs_operational": 4, 01:31:06.641 "base_bdevs_list": [ 01:31:06.641 { 01:31:06.641 "name": "BaseBdev1", 01:31:06.641 "uuid": "3a1c842a-64b3-4576-a77f-c0598c72a99f", 
01:31:06.641 "is_configured": true, 01:31:06.641 "data_offset": 2048, 01:31:06.641 "data_size": 63488 01:31:06.641 }, 01:31:06.641 { 01:31:06.641 "name": null, 01:31:06.641 "uuid": "bee11843-45ea-4f95-bcb0-ebd9630b5545", 01:31:06.641 "is_configured": false, 01:31:06.641 "data_offset": 0, 01:31:06.641 "data_size": 63488 01:31:06.641 }, 01:31:06.641 { 01:31:06.641 "name": "BaseBdev3", 01:31:06.641 "uuid": "91c8282d-ddce-4688-95dd-171821ae4097", 01:31:06.641 "is_configured": true, 01:31:06.641 "data_offset": 2048, 01:31:06.641 "data_size": 63488 01:31:06.641 }, 01:31:06.641 { 01:31:06.641 "name": "BaseBdev4", 01:31:06.641 "uuid": "ee9289ee-7683-4edb-820d-257bc8ee0880", 01:31:06.641 "is_configured": true, 01:31:06.641 "data_offset": 2048, 01:31:06.641 "data_size": 63488 01:31:06.641 } 01:31:06.641 ] 01:31:06.641 }' 01:31:06.641 05:25:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:31:06.641 05:25:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:31:07.208 05:25:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 01:31:07.208 05:25:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 01:31:07.208 05:25:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:07.208 05:25:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:31:07.208 05:25:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:07.208 05:25:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 01:31:07.208 05:25:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 01:31:07.208 05:25:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
01:31:07.208 05:25:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:31:07.208 [2024-12-09 05:25:58.718628] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 01:31:07.208 05:25:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:07.208 05:25:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 01:31:07.208 05:25:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:31:07.208 05:25:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:31:07.208 05:25:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 01:31:07.208 05:25:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:31:07.208 05:25:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 01:31:07.208 05:25:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:31:07.208 05:25:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:31:07.208 05:25:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:31:07.208 05:25:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 01:31:07.208 05:25:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:31:07.208 05:25:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:07.208 05:25:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:31:07.208 05:25:58 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 01:31:07.208 05:25:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:07.208 05:25:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:31:07.208 "name": "Existed_Raid", 01:31:07.208 "uuid": "c1e5a767-107d-4dc6-a071-181a294a81d4", 01:31:07.208 "strip_size_kb": 64, 01:31:07.208 "state": "configuring", 01:31:07.208 "raid_level": "raid5f", 01:31:07.208 "superblock": true, 01:31:07.208 "num_base_bdevs": 4, 01:31:07.208 "num_base_bdevs_discovered": 2, 01:31:07.208 "num_base_bdevs_operational": 4, 01:31:07.208 "base_bdevs_list": [ 01:31:07.208 { 01:31:07.208 "name": "BaseBdev1", 01:31:07.208 "uuid": "3a1c842a-64b3-4576-a77f-c0598c72a99f", 01:31:07.208 "is_configured": true, 01:31:07.208 "data_offset": 2048, 01:31:07.208 "data_size": 63488 01:31:07.208 }, 01:31:07.208 { 01:31:07.208 "name": null, 01:31:07.208 "uuid": "bee11843-45ea-4f95-bcb0-ebd9630b5545", 01:31:07.208 "is_configured": false, 01:31:07.208 "data_offset": 0, 01:31:07.208 "data_size": 63488 01:31:07.208 }, 01:31:07.208 { 01:31:07.208 "name": null, 01:31:07.208 "uuid": "91c8282d-ddce-4688-95dd-171821ae4097", 01:31:07.208 "is_configured": false, 01:31:07.208 "data_offset": 0, 01:31:07.208 "data_size": 63488 01:31:07.208 }, 01:31:07.208 { 01:31:07.208 "name": "BaseBdev4", 01:31:07.208 "uuid": "ee9289ee-7683-4edb-820d-257bc8ee0880", 01:31:07.208 "is_configured": true, 01:31:07.208 "data_offset": 2048, 01:31:07.208 "data_size": 63488 01:31:07.208 } 01:31:07.208 ] 01:31:07.208 }' 01:31:07.208 05:25:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:31:07.208 05:25:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:31:07.775 05:25:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 01:31:07.775 05:25:59 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 01:31:07.775 05:25:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:07.775 05:25:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:31:07.775 05:25:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:07.775 05:25:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 01:31:07.775 05:25:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 01:31:07.775 05:25:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:07.775 05:25:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:31:07.775 [2024-12-09 05:25:59.310811] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 01:31:07.775 05:25:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:07.775 05:25:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 01:31:07.775 05:25:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:31:07.775 05:25:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:31:07.775 05:25:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 01:31:07.775 05:25:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:31:07.775 05:25:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 01:31:07.775 05:25:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
01:31:07.775 05:25:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:31:07.775 05:25:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:31:07.775 05:25:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 01:31:07.775 05:25:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:31:07.775 05:25:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:31:07.775 05:25:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:07.775 05:25:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:31:07.775 05:25:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:07.775 05:25:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:31:07.775 "name": "Existed_Raid", 01:31:07.775 "uuid": "c1e5a767-107d-4dc6-a071-181a294a81d4", 01:31:07.775 "strip_size_kb": 64, 01:31:07.775 "state": "configuring", 01:31:07.775 "raid_level": "raid5f", 01:31:07.775 "superblock": true, 01:31:07.775 "num_base_bdevs": 4, 01:31:07.775 "num_base_bdevs_discovered": 3, 01:31:07.775 "num_base_bdevs_operational": 4, 01:31:07.775 "base_bdevs_list": [ 01:31:07.775 { 01:31:07.775 "name": "BaseBdev1", 01:31:07.775 "uuid": "3a1c842a-64b3-4576-a77f-c0598c72a99f", 01:31:07.775 "is_configured": true, 01:31:07.775 "data_offset": 2048, 01:31:07.775 "data_size": 63488 01:31:07.775 }, 01:31:07.775 { 01:31:07.775 "name": null, 01:31:07.775 "uuid": "bee11843-45ea-4f95-bcb0-ebd9630b5545", 01:31:07.775 "is_configured": false, 01:31:07.775 "data_offset": 0, 01:31:07.775 "data_size": 63488 01:31:07.775 }, 01:31:07.775 { 01:31:07.775 "name": "BaseBdev3", 01:31:07.775 "uuid": "91c8282d-ddce-4688-95dd-171821ae4097", 
01:31:07.776 "is_configured": true, 01:31:07.776 "data_offset": 2048, 01:31:07.776 "data_size": 63488 01:31:07.776 }, 01:31:07.776 { 01:31:07.776 "name": "BaseBdev4", 01:31:07.776 "uuid": "ee9289ee-7683-4edb-820d-257bc8ee0880", 01:31:07.776 "is_configured": true, 01:31:07.776 "data_offset": 2048, 01:31:07.776 "data_size": 63488 01:31:07.776 } 01:31:07.776 ] 01:31:07.776 }' 01:31:07.776 05:25:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:31:07.776 05:25:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:31:08.366 05:25:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 01:31:08.366 05:25:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:08.366 05:25:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 01:31:08.366 05:25:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:31:08.366 05:25:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:08.366 05:25:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 01:31:08.366 05:25:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 01:31:08.366 05:25:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:08.366 05:25:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:31:08.366 [2024-12-09 05:25:59.899056] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 01:31:08.623 05:25:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:08.623 05:25:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid 
configuring raid5f 64 4 01:31:08.623 05:25:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:31:08.623 05:25:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:31:08.623 05:25:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 01:31:08.623 05:25:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:31:08.623 05:25:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 01:31:08.623 05:25:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:31:08.623 05:25:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:31:08.623 05:25:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:31:08.623 05:25:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 01:31:08.623 05:25:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:31:08.623 05:25:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:31:08.623 05:25:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:08.623 05:25:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:31:08.623 05:26:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:08.623 05:26:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:31:08.623 "name": "Existed_Raid", 01:31:08.623 "uuid": "c1e5a767-107d-4dc6-a071-181a294a81d4", 01:31:08.623 "strip_size_kb": 64, 01:31:08.623 "state": "configuring", 01:31:08.623 "raid_level": "raid5f", 
01:31:08.623 "superblock": true, 01:31:08.623 "num_base_bdevs": 4, 01:31:08.623 "num_base_bdevs_discovered": 2, 01:31:08.623 "num_base_bdevs_operational": 4, 01:31:08.623 "base_bdevs_list": [ 01:31:08.623 { 01:31:08.623 "name": null, 01:31:08.623 "uuid": "3a1c842a-64b3-4576-a77f-c0598c72a99f", 01:31:08.623 "is_configured": false, 01:31:08.623 "data_offset": 0, 01:31:08.623 "data_size": 63488 01:31:08.623 }, 01:31:08.623 { 01:31:08.623 "name": null, 01:31:08.623 "uuid": "bee11843-45ea-4f95-bcb0-ebd9630b5545", 01:31:08.623 "is_configured": false, 01:31:08.623 "data_offset": 0, 01:31:08.623 "data_size": 63488 01:31:08.623 }, 01:31:08.623 { 01:31:08.623 "name": "BaseBdev3", 01:31:08.623 "uuid": "91c8282d-ddce-4688-95dd-171821ae4097", 01:31:08.623 "is_configured": true, 01:31:08.623 "data_offset": 2048, 01:31:08.623 "data_size": 63488 01:31:08.623 }, 01:31:08.623 { 01:31:08.623 "name": "BaseBdev4", 01:31:08.623 "uuid": "ee9289ee-7683-4edb-820d-257bc8ee0880", 01:31:08.623 "is_configured": true, 01:31:08.623 "data_offset": 2048, 01:31:08.623 "data_size": 63488 01:31:08.623 } 01:31:08.623 ] 01:31:08.623 }' 01:31:08.623 05:26:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:31:08.623 05:26:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:31:09.189 05:26:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 01:31:09.189 05:26:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 01:31:09.189 05:26:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:09.189 05:26:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:31:09.189 05:26:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:09.189 05:26:00 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 01:31:09.189 05:26:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 01:31:09.189 05:26:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:09.189 05:26:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:31:09.189 [2024-12-09 05:26:00.599672] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 01:31:09.189 05:26:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:09.189 05:26:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 01:31:09.189 05:26:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:31:09.189 05:26:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:31:09.189 05:26:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 01:31:09.189 05:26:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:31:09.189 05:26:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 01:31:09.189 05:26:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:31:09.189 05:26:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:31:09.189 05:26:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:31:09.189 05:26:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 01:31:09.189 05:26:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
01:31:09.189 05:26:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:31:09.189 05:26:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:09.189 05:26:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:31:09.189 05:26:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:09.189 05:26:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:31:09.189 "name": "Existed_Raid", 01:31:09.189 "uuid": "c1e5a767-107d-4dc6-a071-181a294a81d4", 01:31:09.189 "strip_size_kb": 64, 01:31:09.189 "state": "configuring", 01:31:09.189 "raid_level": "raid5f", 01:31:09.189 "superblock": true, 01:31:09.189 "num_base_bdevs": 4, 01:31:09.189 "num_base_bdevs_discovered": 3, 01:31:09.189 "num_base_bdevs_operational": 4, 01:31:09.189 "base_bdevs_list": [ 01:31:09.189 { 01:31:09.189 "name": null, 01:31:09.189 "uuid": "3a1c842a-64b3-4576-a77f-c0598c72a99f", 01:31:09.189 "is_configured": false, 01:31:09.189 "data_offset": 0, 01:31:09.189 "data_size": 63488 01:31:09.189 }, 01:31:09.189 { 01:31:09.189 "name": "BaseBdev2", 01:31:09.189 "uuid": "bee11843-45ea-4f95-bcb0-ebd9630b5545", 01:31:09.189 "is_configured": true, 01:31:09.189 "data_offset": 2048, 01:31:09.189 "data_size": 63488 01:31:09.189 }, 01:31:09.189 { 01:31:09.189 "name": "BaseBdev3", 01:31:09.189 "uuid": "91c8282d-ddce-4688-95dd-171821ae4097", 01:31:09.189 "is_configured": true, 01:31:09.189 "data_offset": 2048, 01:31:09.189 "data_size": 63488 01:31:09.189 }, 01:31:09.189 { 01:31:09.189 "name": "BaseBdev4", 01:31:09.189 "uuid": "ee9289ee-7683-4edb-820d-257bc8ee0880", 01:31:09.189 "is_configured": true, 01:31:09.189 "data_offset": 2048, 01:31:09.189 "data_size": 63488 01:31:09.189 } 01:31:09.189 ] 01:31:09.189 }' 01:31:09.189 05:26:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 
-- # xtrace_disable 01:31:09.189 05:26:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:31:09.755 05:26:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 01:31:09.755 05:26:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:09.755 05:26:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:31:09.755 05:26:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 01:31:09.755 05:26:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:09.755 05:26:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 01:31:09.755 05:26:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 01:31:09.755 05:26:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 01:31:09.755 05:26:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:09.755 05:26:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:31:09.755 05:26:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:09.755 05:26:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 3a1c842a-64b3-4576-a77f-c0598c72a99f 01:31:09.755 05:26:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:09.755 05:26:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:31:09.755 [2024-12-09 05:26:01.314065] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 01:31:09.755 [2024-12-09 05:26:01.314753] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 01:31:09.755 [2024-12-09 05:26:01.314794] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 01:31:09.755 NewBaseBdev 01:31:09.755 [2024-12-09 05:26:01.315499] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 01:31:09.755 05:26:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:09.755 05:26:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 01:31:09.755 05:26:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 01:31:09.755 05:26:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 01:31:09.755 05:26:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 01:31:09.755 05:26:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 01:31:09.755 05:26:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 01:31:09.755 05:26:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 01:31:09.755 05:26:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:09.755 05:26:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:31:09.755 [2024-12-09 05:26:01.326656] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 01:31:09.755 [2024-12-09 05:26:01.326721] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 01:31:09.755 [2024-12-09 05:26:01.327113] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:31:09.755 05:26:01 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:09.755 05:26:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 01:31:09.755 05:26:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:09.755 05:26:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:31:09.755 [ 01:31:09.755 { 01:31:09.755 "name": "NewBaseBdev", 01:31:09.755 "aliases": [ 01:31:09.755 "3a1c842a-64b3-4576-a77f-c0598c72a99f" 01:31:09.755 ], 01:31:09.755 "product_name": "Malloc disk", 01:31:09.755 "block_size": 512, 01:31:09.755 "num_blocks": 65536, 01:31:09.755 "uuid": "3a1c842a-64b3-4576-a77f-c0598c72a99f", 01:31:09.755 "assigned_rate_limits": { 01:31:09.755 "rw_ios_per_sec": 0, 01:31:09.755 "rw_mbytes_per_sec": 0, 01:31:09.755 "r_mbytes_per_sec": 0, 01:31:09.755 "w_mbytes_per_sec": 0 01:31:09.755 }, 01:31:09.755 "claimed": true, 01:31:09.755 "claim_type": "exclusive_write", 01:31:09.755 "zoned": false, 01:31:09.755 "supported_io_types": { 01:31:09.755 "read": true, 01:31:09.755 "write": true, 01:31:09.755 "unmap": true, 01:31:09.755 "flush": true, 01:31:09.755 "reset": true, 01:31:09.755 "nvme_admin": false, 01:31:09.755 "nvme_io": false, 01:31:09.755 "nvme_io_md": false, 01:31:09.755 "write_zeroes": true, 01:31:09.755 "zcopy": true, 01:31:09.755 "get_zone_info": false, 01:31:09.755 "zone_management": false, 01:31:09.755 "zone_append": false, 01:31:09.755 "compare": false, 01:31:09.755 "compare_and_write": false, 01:31:09.755 "abort": true, 01:31:09.755 "seek_hole": false, 01:31:09.755 "seek_data": false, 01:31:09.755 "copy": true, 01:31:09.755 "nvme_iov_md": false 01:31:09.755 }, 01:31:09.755 "memory_domains": [ 01:31:09.755 { 01:31:09.755 "dma_device_id": "system", 01:31:09.755 "dma_device_type": 1 01:31:09.755 }, 01:31:09.755 { 01:31:09.755 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:31:09.755 "dma_device_type": 2 01:31:09.755 } 
01:31:09.755 ], 01:31:09.755 "driver_specific": {} 01:31:09.755 } 01:31:09.755 ] 01:31:09.755 05:26:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:09.755 05:26:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 01:31:09.755 05:26:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 01:31:09.755 05:26:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:31:09.755 05:26:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:31:09.755 05:26:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 01:31:09.755 05:26:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:31:09.755 05:26:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 01:31:09.755 05:26:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:31:09.755 05:26:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:31:09.755 05:26:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:31:09.755 05:26:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 01:31:09.755 05:26:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:31:09.755 05:26:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:31:09.755 05:26:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:09.755 05:26:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:31:10.013 
05:26:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:10.013 05:26:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:31:10.013 "name": "Existed_Raid", 01:31:10.013 "uuid": "c1e5a767-107d-4dc6-a071-181a294a81d4", 01:31:10.013 "strip_size_kb": 64, 01:31:10.013 "state": "online", 01:31:10.013 "raid_level": "raid5f", 01:31:10.013 "superblock": true, 01:31:10.013 "num_base_bdevs": 4, 01:31:10.013 "num_base_bdevs_discovered": 4, 01:31:10.013 "num_base_bdevs_operational": 4, 01:31:10.013 "base_bdevs_list": [ 01:31:10.013 { 01:31:10.013 "name": "NewBaseBdev", 01:31:10.013 "uuid": "3a1c842a-64b3-4576-a77f-c0598c72a99f", 01:31:10.013 "is_configured": true, 01:31:10.013 "data_offset": 2048, 01:31:10.013 "data_size": 63488 01:31:10.013 }, 01:31:10.013 { 01:31:10.013 "name": "BaseBdev2", 01:31:10.013 "uuid": "bee11843-45ea-4f95-bcb0-ebd9630b5545", 01:31:10.013 "is_configured": true, 01:31:10.013 "data_offset": 2048, 01:31:10.013 "data_size": 63488 01:31:10.013 }, 01:31:10.013 { 01:31:10.013 "name": "BaseBdev3", 01:31:10.013 "uuid": "91c8282d-ddce-4688-95dd-171821ae4097", 01:31:10.013 "is_configured": true, 01:31:10.013 "data_offset": 2048, 01:31:10.013 "data_size": 63488 01:31:10.013 }, 01:31:10.013 { 01:31:10.013 "name": "BaseBdev4", 01:31:10.013 "uuid": "ee9289ee-7683-4edb-820d-257bc8ee0880", 01:31:10.013 "is_configured": true, 01:31:10.013 "data_offset": 2048, 01:31:10.013 "data_size": 63488 01:31:10.013 } 01:31:10.013 ] 01:31:10.013 }' 01:31:10.013 05:26:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:31:10.013 05:26:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:31:10.270 05:26:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 01:31:10.270 05:26:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local 
raid_bdev_name=Existed_Raid 01:31:10.270 05:26:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 01:31:10.270 05:26:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 01:31:10.270 05:26:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 01:31:10.270 05:26:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 01:31:10.528 05:26:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 01:31:10.528 05:26:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 01:31:10.528 05:26:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:10.528 05:26:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:31:10.528 [2024-12-09 05:26:01.893231] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 01:31:10.528 05:26:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:10.528 05:26:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 01:31:10.528 "name": "Existed_Raid", 01:31:10.528 "aliases": [ 01:31:10.528 "c1e5a767-107d-4dc6-a071-181a294a81d4" 01:31:10.528 ], 01:31:10.528 "product_name": "Raid Volume", 01:31:10.528 "block_size": 512, 01:31:10.528 "num_blocks": 190464, 01:31:10.528 "uuid": "c1e5a767-107d-4dc6-a071-181a294a81d4", 01:31:10.528 "assigned_rate_limits": { 01:31:10.528 "rw_ios_per_sec": 0, 01:31:10.528 "rw_mbytes_per_sec": 0, 01:31:10.528 "r_mbytes_per_sec": 0, 01:31:10.528 "w_mbytes_per_sec": 0 01:31:10.528 }, 01:31:10.528 "claimed": false, 01:31:10.528 "zoned": false, 01:31:10.528 "supported_io_types": { 01:31:10.528 "read": true, 01:31:10.528 "write": true, 01:31:10.528 "unmap": false, 01:31:10.528 "flush": false, 
01:31:10.528 "reset": true, 01:31:10.528 "nvme_admin": false, 01:31:10.528 "nvme_io": false, 01:31:10.528 "nvme_io_md": false, 01:31:10.528 "write_zeroes": true, 01:31:10.528 "zcopy": false, 01:31:10.528 "get_zone_info": false, 01:31:10.528 "zone_management": false, 01:31:10.528 "zone_append": false, 01:31:10.528 "compare": false, 01:31:10.528 "compare_and_write": false, 01:31:10.528 "abort": false, 01:31:10.528 "seek_hole": false, 01:31:10.528 "seek_data": false, 01:31:10.528 "copy": false, 01:31:10.528 "nvme_iov_md": false 01:31:10.528 }, 01:31:10.528 "driver_specific": { 01:31:10.528 "raid": { 01:31:10.528 "uuid": "c1e5a767-107d-4dc6-a071-181a294a81d4", 01:31:10.528 "strip_size_kb": 64, 01:31:10.528 "state": "online", 01:31:10.528 "raid_level": "raid5f", 01:31:10.528 "superblock": true, 01:31:10.528 "num_base_bdevs": 4, 01:31:10.528 "num_base_bdevs_discovered": 4, 01:31:10.528 "num_base_bdevs_operational": 4, 01:31:10.528 "base_bdevs_list": [ 01:31:10.528 { 01:31:10.528 "name": "NewBaseBdev", 01:31:10.528 "uuid": "3a1c842a-64b3-4576-a77f-c0598c72a99f", 01:31:10.528 "is_configured": true, 01:31:10.528 "data_offset": 2048, 01:31:10.528 "data_size": 63488 01:31:10.528 }, 01:31:10.528 { 01:31:10.528 "name": "BaseBdev2", 01:31:10.528 "uuid": "bee11843-45ea-4f95-bcb0-ebd9630b5545", 01:31:10.528 "is_configured": true, 01:31:10.528 "data_offset": 2048, 01:31:10.528 "data_size": 63488 01:31:10.528 }, 01:31:10.528 { 01:31:10.528 "name": "BaseBdev3", 01:31:10.528 "uuid": "91c8282d-ddce-4688-95dd-171821ae4097", 01:31:10.528 "is_configured": true, 01:31:10.528 "data_offset": 2048, 01:31:10.528 "data_size": 63488 01:31:10.528 }, 01:31:10.528 { 01:31:10.528 "name": "BaseBdev4", 01:31:10.528 "uuid": "ee9289ee-7683-4edb-820d-257bc8ee0880", 01:31:10.528 "is_configured": true, 01:31:10.528 "data_offset": 2048, 01:31:10.528 "data_size": 63488 01:31:10.528 } 01:31:10.528 ] 01:31:10.528 } 01:31:10.528 } 01:31:10.528 }' 01:31:10.528 05:26:01 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 01:31:10.528 05:26:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 01:31:10.528 BaseBdev2 01:31:10.528 BaseBdev3 01:31:10.528 BaseBdev4' 01:31:10.528 05:26:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:31:10.528 05:26:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 01:31:10.528 05:26:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:31:10.528 05:26:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 01:31:10.528 05:26:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:31:10.528 05:26:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:10.528 05:26:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:31:10.528 05:26:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:10.528 05:26:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:31:10.528 05:26:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:31:10.528 05:26:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:31:10.528 05:26:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:31:10.528 05:26:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 01:31:10.528 
05:26:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:10.529 05:26:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:31:10.529 05:26:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:10.787 05:26:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:31:10.787 05:26:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:31:10.787 05:26:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:31:10.787 05:26:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 01:31:10.787 05:26:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:10.787 05:26:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:31:10.787 05:26:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:31:10.787 05:26:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:10.787 05:26:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:31:10.787 05:26:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:31:10.787 05:26:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:31:10.787 05:26:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 01:31:10.787 05:26:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:10.787 05:26:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 01:31:10.787 05:26:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:31:10.787 05:26:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:10.787 05:26:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:31:10.787 05:26:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:31:10.787 05:26:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 01:31:10.787 05:26:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:10.787 05:26:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:31:10.787 [2024-12-09 05:26:02.272877] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 01:31:10.787 [2024-12-09 05:26:02.273081] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 01:31:10.787 [2024-12-09 05:26:02.273195] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 01:31:10.787 [2024-12-09 05:26:02.273655] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 01:31:10.787 [2024-12-09 05:26:02.273675] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 01:31:10.787 05:26:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:10.787 05:26:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 83776 01:31:10.787 05:26:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 83776 ']' 01:31:10.787 05:26:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 83776 
01:31:10.787 05:26:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 01:31:10.787 05:26:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:31:10.787 05:26:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83776 01:31:10.787 05:26:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:31:10.787 05:26:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:31:10.787 05:26:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83776' 01:31:10.787 killing process with pid 83776 01:31:10.787 05:26:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 83776 01:31:10.787 [2024-12-09 05:26:02.315413] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 01:31:10.787 05:26:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 83776 01:31:11.045 [2024-12-09 05:26:02.646415] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 01:31:12.419 05:26:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 01:31:12.419 ************************************ 01:31:12.419 END TEST raid5f_state_function_test_sb 01:31:12.419 ************************************ 01:31:12.419 01:31:12.419 real 0m13.129s 01:31:12.419 user 0m21.809s 01:31:12.419 sys 0m1.816s 01:31:12.419 05:26:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 01:31:12.419 05:26:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 01:31:12.419 05:26:03 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 01:31:12.419 05:26:03 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 
']' 01:31:12.419 05:26:03 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 01:31:12.419 05:26:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 01:31:12.419 ************************************ 01:31:12.419 START TEST raid5f_superblock_test 01:31:12.419 ************************************ 01:31:12.419 05:26:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 4 01:31:12.419 05:26:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 01:31:12.419 05:26:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 01:31:12.419 05:26:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 01:31:12.419 05:26:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 01:31:12.419 05:26:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 01:31:12.419 05:26:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 01:31:12.419 05:26:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 01:31:12.419 05:26:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 01:31:12.419 05:26:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 01:31:12.419 05:26:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 01:31:12.419 05:26:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 01:31:12.419 05:26:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 01:31:12.419 05:26:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 01:31:12.419 05:26:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 01:31:12.419 05:26:03 bdev_raid.raid5f_superblock_test 
-- bdev/bdev_raid.sh@405 -- # strip_size=64 01:31:12.419 05:26:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 01:31:12.419 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:31:12.419 05:26:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=84458 01:31:12.419 05:26:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 84458 01:31:12.419 05:26:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 01:31:12.419 05:26:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 84458 ']' 01:31:12.419 05:26:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:31:12.419 05:26:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 01:31:12.419 05:26:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:31:12.419 05:26:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 01:31:12.419 05:26:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:31:12.419 [2024-12-09 05:26:04.015901] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
01:31:12.419 [2024-12-09 05:26:04.016096] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84458 ] 01:31:12.677 [2024-12-09 05:26:04.198973] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:31:12.936 [2024-12-09 05:26:04.341539] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:31:13.196 [2024-12-09 05:26:04.563004] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 01:31:13.196 [2024-12-09 05:26:04.563093] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 01:31:13.455 05:26:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:31:13.455 05:26:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 01:31:13.455 05:26:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 01:31:13.455 05:26:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 01:31:13.455 05:26:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 01:31:13.455 05:26:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 01:31:13.455 05:26:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 01:31:13.455 05:26:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 01:31:13.455 05:26:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 01:31:13.455 05:26:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 01:31:13.455 05:26:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 01:31:13.455 05:26:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:13.455 05:26:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:31:13.455 malloc1 01:31:13.455 05:26:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:13.455 05:26:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 01:31:13.455 05:26:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:13.455 05:26:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:31:13.455 [2024-12-09 05:26:05.060257] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 01:31:13.455 [2024-12-09 05:26:05.060349] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:31:13.455 [2024-12-09 05:26:05.060433] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 01:31:13.455 [2024-12-09 05:26:05.060451] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:31:13.455 [2024-12-09 05:26:05.063447] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:31:13.455 [2024-12-09 05:26:05.063489] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 01:31:13.455 pt1 01:31:13.455 05:26:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:13.455 05:26:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 01:31:13.455 05:26:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 01:31:13.455 05:26:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 01:31:13.455 05:26:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
01:31:13.455 05:26:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 01:31:13.455 05:26:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 01:31:13.455 05:26:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 01:31:13.455 05:26:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 01:31:13.455 05:26:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 01:31:13.455 05:26:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:13.455 05:26:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:31:13.715 malloc2 01:31:13.715 05:26:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:13.715 05:26:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 01:31:13.715 05:26:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:13.715 05:26:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:31:13.715 [2024-12-09 05:26:05.114476] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 01:31:13.715 [2024-12-09 05:26:05.114553] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:31:13.715 [2024-12-09 05:26:05.114589] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 01:31:13.715 [2024-12-09 05:26:05.114619] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:31:13.715 [2024-12-09 05:26:05.117702] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:31:13.715 [2024-12-09 05:26:05.118016] vbdev_passthru.c: 
711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 01:31:13.715 pt2 01:31:13.715 05:26:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:13.715 05:26:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 01:31:13.715 05:26:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 01:31:13.715 05:26:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 01:31:13.715 05:26:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 01:31:13.715 05:26:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 01:31:13.715 05:26:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 01:31:13.715 05:26:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 01:31:13.715 05:26:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 01:31:13.715 05:26:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 01:31:13.715 05:26:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:13.715 05:26:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:31:13.715 malloc3 01:31:13.715 05:26:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:13.715 05:26:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 01:31:13.715 05:26:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:13.715 05:26:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:31:13.715 [2024-12-09 05:26:05.184492] 
vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 01:31:13.715 [2024-12-09 05:26:05.184566] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:31:13.715 [2024-12-09 05:26:05.184615] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 01:31:13.715 [2024-12-09 05:26:05.184631] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:31:13.715 [2024-12-09 05:26:05.187437] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:31:13.715 [2024-12-09 05:26:05.187478] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 01:31:13.715 pt3 01:31:13.715 05:26:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:13.715 05:26:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 01:31:13.715 05:26:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 01:31:13.715 05:26:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 01:31:13.715 05:26:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 01:31:13.715 05:26:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 01:31:13.715 05:26:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 01:31:13.715 05:26:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 01:31:13.715 05:26:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 01:31:13.715 05:26:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 01:31:13.715 05:26:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:13.715 05:26:05 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:31:13.715 malloc4 01:31:13.715 05:26:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:13.715 05:26:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 01:31:13.716 05:26:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:13.716 05:26:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:31:13.716 [2024-12-09 05:26:05.236451] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 01:31:13.716 [2024-12-09 05:26:05.236535] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:31:13.716 [2024-12-09 05:26:05.236567] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 01:31:13.716 [2024-12-09 05:26:05.236582] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:31:13.716 [2024-12-09 05:26:05.239650] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:31:13.716 [2024-12-09 05:26:05.239964] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 01:31:13.716 pt4 01:31:13.716 05:26:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:13.716 05:26:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 01:31:13.716 05:26:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 01:31:13.716 05:26:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 01:31:13.716 05:26:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:13.716 05:26:05 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 01:31:13.716 [2024-12-09 05:26:05.244639] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 01:31:13.716 [2024-12-09 05:26:05.247142] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 01:31:13.716 [2024-12-09 05:26:05.247449] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 01:31:13.716 [2024-12-09 05:26:05.247526] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 01:31:13.716 [2024-12-09 05:26:05.247781] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 01:31:13.716 [2024-12-09 05:26:05.247803] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 01:31:13.716 [2024-12-09 05:26:05.248102] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 01:31:13.716 [2024-12-09 05:26:05.254242] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 01:31:13.716 [2024-12-09 05:26:05.254422] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 01:31:13.716 [2024-12-09 05:26:05.254665] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:31:13.716 05:26:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:13.716 05:26:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 01:31:13.716 05:26:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:31:13.716 05:26:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:31:13.716 05:26:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 01:31:13.716 05:26:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:31:13.716 
05:26:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 01:31:13.716 05:26:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:31:13.716 05:26:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:31:13.716 05:26:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:31:13.716 05:26:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:31:13.716 05:26:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:31:13.716 05:26:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:31:13.716 05:26:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:13.716 05:26:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:31:13.716 05:26:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:13.716 05:26:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:31:13.716 "name": "raid_bdev1", 01:31:13.716 "uuid": "7379429c-05ce-4d8a-b8b0-73913b2cad09", 01:31:13.716 "strip_size_kb": 64, 01:31:13.716 "state": "online", 01:31:13.716 "raid_level": "raid5f", 01:31:13.716 "superblock": true, 01:31:13.716 "num_base_bdevs": 4, 01:31:13.716 "num_base_bdevs_discovered": 4, 01:31:13.716 "num_base_bdevs_operational": 4, 01:31:13.716 "base_bdevs_list": [ 01:31:13.716 { 01:31:13.716 "name": "pt1", 01:31:13.716 "uuid": "00000000-0000-0000-0000-000000000001", 01:31:13.716 "is_configured": true, 01:31:13.716 "data_offset": 2048, 01:31:13.716 "data_size": 63488 01:31:13.716 }, 01:31:13.716 { 01:31:13.716 "name": "pt2", 01:31:13.716 "uuid": "00000000-0000-0000-0000-000000000002", 01:31:13.716 "is_configured": true, 01:31:13.716 "data_offset": 2048, 01:31:13.716 
"data_size": 63488 01:31:13.716 }, 01:31:13.716 { 01:31:13.716 "name": "pt3", 01:31:13.716 "uuid": "00000000-0000-0000-0000-000000000003", 01:31:13.716 "is_configured": true, 01:31:13.716 "data_offset": 2048, 01:31:13.716 "data_size": 63488 01:31:13.716 }, 01:31:13.716 { 01:31:13.716 "name": "pt4", 01:31:13.716 "uuid": "00000000-0000-0000-0000-000000000004", 01:31:13.716 "is_configured": true, 01:31:13.716 "data_offset": 2048, 01:31:13.716 "data_size": 63488 01:31:13.716 } 01:31:13.716 ] 01:31:13.716 }' 01:31:13.716 05:26:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:31:13.716 05:26:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:31:14.282 05:26:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 01:31:14.282 05:26:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 01:31:14.282 05:26:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 01:31:14.282 05:26:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 01:31:14.282 05:26:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 01:31:14.283 05:26:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 01:31:14.283 05:26:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 01:31:14.283 05:26:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:14.283 05:26:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 01:31:14.283 05:26:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:31:14.283 [2024-12-09 05:26:05.783176] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 01:31:14.283 05:26:05 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:14.283 05:26:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 01:31:14.283 "name": "raid_bdev1", 01:31:14.283 "aliases": [ 01:31:14.283 "7379429c-05ce-4d8a-b8b0-73913b2cad09" 01:31:14.283 ], 01:31:14.283 "product_name": "Raid Volume", 01:31:14.283 "block_size": 512, 01:31:14.283 "num_blocks": 190464, 01:31:14.283 "uuid": "7379429c-05ce-4d8a-b8b0-73913b2cad09", 01:31:14.283 "assigned_rate_limits": { 01:31:14.283 "rw_ios_per_sec": 0, 01:31:14.283 "rw_mbytes_per_sec": 0, 01:31:14.283 "r_mbytes_per_sec": 0, 01:31:14.283 "w_mbytes_per_sec": 0 01:31:14.283 }, 01:31:14.283 "claimed": false, 01:31:14.283 "zoned": false, 01:31:14.283 "supported_io_types": { 01:31:14.283 "read": true, 01:31:14.283 "write": true, 01:31:14.283 "unmap": false, 01:31:14.283 "flush": false, 01:31:14.283 "reset": true, 01:31:14.283 "nvme_admin": false, 01:31:14.283 "nvme_io": false, 01:31:14.283 "nvme_io_md": false, 01:31:14.283 "write_zeroes": true, 01:31:14.283 "zcopy": false, 01:31:14.283 "get_zone_info": false, 01:31:14.283 "zone_management": false, 01:31:14.283 "zone_append": false, 01:31:14.283 "compare": false, 01:31:14.283 "compare_and_write": false, 01:31:14.283 "abort": false, 01:31:14.283 "seek_hole": false, 01:31:14.283 "seek_data": false, 01:31:14.283 "copy": false, 01:31:14.283 "nvme_iov_md": false 01:31:14.283 }, 01:31:14.283 "driver_specific": { 01:31:14.283 "raid": { 01:31:14.283 "uuid": "7379429c-05ce-4d8a-b8b0-73913b2cad09", 01:31:14.283 "strip_size_kb": 64, 01:31:14.283 "state": "online", 01:31:14.283 "raid_level": "raid5f", 01:31:14.283 "superblock": true, 01:31:14.283 "num_base_bdevs": 4, 01:31:14.283 "num_base_bdevs_discovered": 4, 01:31:14.283 "num_base_bdevs_operational": 4, 01:31:14.283 "base_bdevs_list": [ 01:31:14.283 { 01:31:14.283 "name": "pt1", 01:31:14.283 "uuid": "00000000-0000-0000-0000-000000000001", 01:31:14.283 "is_configured": true, 01:31:14.283 "data_offset": 2048, 
01:31:14.283 "data_size": 63488 01:31:14.283 }, 01:31:14.283 { 01:31:14.283 "name": "pt2", 01:31:14.283 "uuid": "00000000-0000-0000-0000-000000000002", 01:31:14.283 "is_configured": true, 01:31:14.283 "data_offset": 2048, 01:31:14.283 "data_size": 63488 01:31:14.283 }, 01:31:14.283 { 01:31:14.283 "name": "pt3", 01:31:14.283 "uuid": "00000000-0000-0000-0000-000000000003", 01:31:14.283 "is_configured": true, 01:31:14.283 "data_offset": 2048, 01:31:14.283 "data_size": 63488 01:31:14.283 }, 01:31:14.283 { 01:31:14.283 "name": "pt4", 01:31:14.283 "uuid": "00000000-0000-0000-0000-000000000004", 01:31:14.283 "is_configured": true, 01:31:14.283 "data_offset": 2048, 01:31:14.283 "data_size": 63488 01:31:14.283 } 01:31:14.283 ] 01:31:14.283 } 01:31:14.283 } 01:31:14.283 }' 01:31:14.283 05:26:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 01:31:14.283 05:26:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 01:31:14.283 pt2 01:31:14.283 pt3 01:31:14.283 pt4' 01:31:14.283 05:26:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:31:14.541 05:26:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 01:31:14.541 05:26:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:31:14.541 05:26:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 01:31:14.541 05:26:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:14.541 05:26:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:31:14.541 05:26:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:31:14.541 05:26:05 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:14.541 05:26:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:31:14.541 05:26:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:31:14.541 05:26:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:31:14.541 05:26:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 01:31:14.541 05:26:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:14.541 05:26:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:31:14.541 05:26:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:31:14.541 05:26:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:14.541 05:26:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:31:14.541 05:26:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:31:14.541 05:26:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:31:14.541 05:26:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 01:31:14.541 05:26:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:31:14.541 05:26:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:14.541 05:26:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:31:14.541 05:26:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:14.541 05:26:06 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:31:14.541 05:26:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:31:14.541 05:26:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:31:14.541 05:26:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 01:31:14.541 05:26:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:31:14.541 05:26:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:14.541 05:26:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:31:14.541 05:26:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:14.800 05:26:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:31:14.800 05:26:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:31:14.800 05:26:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 01:31:14.800 05:26:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:14.800 05:26:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:31:14.800 05:26:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 01:31:14.800 [2024-12-09 05:26:06.175233] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 01:31:14.800 05:26:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:14.800 05:26:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=7379429c-05ce-4d8a-b8b0-73913b2cad09 01:31:14.800 05:26:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 
7379429c-05ce-4d8a-b8b0-73913b2cad09 ']' 01:31:14.800 05:26:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 01:31:14.800 05:26:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:14.800 05:26:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:31:14.800 [2024-12-09 05:26:06.227063] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 01:31:14.800 [2024-12-09 05:26:06.227101] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 01:31:14.800 [2024-12-09 05:26:06.227206] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 01:31:14.800 [2024-12-09 05:26:06.227332] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 01:31:14.800 [2024-12-09 05:26:06.227379] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 01:31:14.800 05:26:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:14.800 05:26:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 01:31:14.800 05:26:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 01:31:14.800 05:26:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:14.800 05:26:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:31:14.800 05:26:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:14.800 05:26:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 01:31:14.800 05:26:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 01:31:14.800 05:26:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 01:31:14.800 
05:26:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 01:31:14.800 05:26:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:14.800 05:26:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:31:14.800 05:26:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:14.800 05:26:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 01:31:14.800 05:26:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 01:31:14.800 05:26:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:14.800 05:26:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:31:14.800 05:26:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:14.800 05:26:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 01:31:14.800 05:26:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 01:31:14.800 05:26:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:14.800 05:26:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:31:14.800 05:26:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:14.800 05:26:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 01:31:14.800 05:26:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 01:31:14.800 05:26:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:14.800 05:26:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:31:14.800 05:26:06 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:14.800 05:26:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 01:31:14.800 05:26:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:14.800 05:26:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 01:31:14.800 05:26:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:31:14.800 05:26:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:14.800 05:26:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 01:31:14.800 05:26:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 01:31:14.800 05:26:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 01:31:14.800 05:26:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 01:31:14.800 05:26:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 01:31:14.800 05:26:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:31:14.800 05:26:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 01:31:14.800 05:26:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:31:14.800 05:26:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 01:31:14.800 05:26:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 01:31:14.800 05:26:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:31:14.800 [2024-12-09 05:26:06.371303] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 01:31:14.800 [2024-12-09 05:26:06.375492] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 01:31:14.800 [2024-12-09 05:26:06.375588] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 01:31:14.800 [2024-12-09 05:26:06.375672] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 01:31:14.800 [2024-12-09 05:26:06.375774] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 01:31:14.800 [2024-12-09 05:26:06.375869] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 01:31:14.800 [2024-12-09 05:26:06.375919] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 01:31:14.800 [2024-12-09 05:26:06.375968] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 01:31:14.800 [2024-12-09 05:26:06.376009] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 01:31:14.800 [2024-12-09 05:26:06.376032] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 01:31:14.800 request: 01:31:14.800 { 01:31:14.800 "name": "raid_bdev1", 01:31:14.800 "raid_level": "raid5f", 01:31:14.800 "base_bdevs": [ 01:31:14.800 "malloc1", 01:31:14.800 "malloc2", 01:31:14.800 "malloc3", 01:31:14.800 "malloc4" 01:31:14.800 ], 01:31:14.800 "strip_size_kb": 64, 01:31:14.800 "superblock": false, 01:31:14.800 "method": "bdev_raid_create", 01:31:14.800 "req_id": 1 01:31:14.800 } 01:31:14.800 Got JSON-RPC error response 
01:31:14.800 response: 01:31:14.800 { 01:31:14.800 "code": -17, 01:31:14.800 "message": "Failed to create RAID bdev raid_bdev1: File exists" 01:31:14.800 } 01:31:14.800 05:26:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 01:31:14.800 05:26:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1 01:31:14.800 05:26:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:31:14.800 05:26:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:31:14.800 05:26:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:31:14.800 05:26:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 01:31:14.800 05:26:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:14.800 05:26:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 01:31:14.800 05:26:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:31:14.800 05:26:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:15.059 05:26:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 01:31:15.059 05:26:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 01:31:15.059 05:26:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 01:31:15.059 05:26:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:15.059 05:26:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:31:15.059 [2024-12-09 05:26:06.443910] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 01:31:15.059 [2024-12-09 05:26:06.444115] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: 
base bdev opened 01:31:15.059 [2024-12-09 05:26:06.444184] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 01:31:15.059 [2024-12-09 05:26:06.444423] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:31:15.059 [2024-12-09 05:26:06.447440] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:31:15.059 [2024-12-09 05:26:06.447611] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 01:31:15.059 [2024-12-09 05:26:06.447814] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 01:31:15.059 [2024-12-09 05:26:06.447983] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 01:31:15.059 pt1 01:31:15.059 05:26:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:15.059 05:26:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 01:31:15.059 05:26:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:31:15.059 05:26:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:31:15.059 05:26:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 01:31:15.059 05:26:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:31:15.059 05:26:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 01:31:15.059 05:26:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:31:15.059 05:26:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:31:15.059 05:26:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:31:15.059 05:26:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 
-- # local tmp 01:31:15.059 05:26:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:31:15.059 05:26:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:15.059 05:26:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:31:15.059 05:26:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:31:15.059 05:26:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:15.059 05:26:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:31:15.059 "name": "raid_bdev1", 01:31:15.059 "uuid": "7379429c-05ce-4d8a-b8b0-73913b2cad09", 01:31:15.059 "strip_size_kb": 64, 01:31:15.059 "state": "configuring", 01:31:15.059 "raid_level": "raid5f", 01:31:15.059 "superblock": true, 01:31:15.059 "num_base_bdevs": 4, 01:31:15.059 "num_base_bdevs_discovered": 1, 01:31:15.059 "num_base_bdevs_operational": 4, 01:31:15.059 "base_bdevs_list": [ 01:31:15.059 { 01:31:15.059 "name": "pt1", 01:31:15.059 "uuid": "00000000-0000-0000-0000-000000000001", 01:31:15.059 "is_configured": true, 01:31:15.059 "data_offset": 2048, 01:31:15.059 "data_size": 63488 01:31:15.059 }, 01:31:15.059 { 01:31:15.059 "name": null, 01:31:15.059 "uuid": "00000000-0000-0000-0000-000000000002", 01:31:15.059 "is_configured": false, 01:31:15.059 "data_offset": 2048, 01:31:15.059 "data_size": 63488 01:31:15.059 }, 01:31:15.059 { 01:31:15.059 "name": null, 01:31:15.059 "uuid": "00000000-0000-0000-0000-000000000003", 01:31:15.059 "is_configured": false, 01:31:15.059 "data_offset": 2048, 01:31:15.059 "data_size": 63488 01:31:15.059 }, 01:31:15.059 { 01:31:15.059 "name": null, 01:31:15.059 "uuid": "00000000-0000-0000-0000-000000000004", 01:31:15.059 "is_configured": false, 01:31:15.059 "data_offset": 2048, 01:31:15.059 "data_size": 63488 01:31:15.059 } 01:31:15.059 ] 01:31:15.059 }' 
01:31:15.059 05:26:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:31:15.059 05:26:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:31:15.631 05:26:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 01:31:15.631 05:26:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 01:31:15.631 05:26:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:15.631 05:26:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:31:15.631 [2024-12-09 05:26:06.996633] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 01:31:15.631 [2024-12-09 05:26:06.996751] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:31:15.631 [2024-12-09 05:26:06.996809] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 01:31:15.631 [2024-12-09 05:26:06.996826] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:31:15.631 [2024-12-09 05:26:06.997488] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:31:15.631 [2024-12-09 05:26:06.997533] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 01:31:15.631 [2024-12-09 05:26:06.997687] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 01:31:15.631 [2024-12-09 05:26:06.997735] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 01:31:15.631 pt2 01:31:15.631 05:26:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:15.631 05:26:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 01:31:15.631 05:26:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
01:31:15.631 05:26:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:31:15.631 [2024-12-09 05:26:07.004580] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 01:31:15.631 05:26:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:15.631 05:26:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 01:31:15.631 05:26:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:31:15.631 05:26:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:31:15.631 05:26:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 01:31:15.631 05:26:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:31:15.631 05:26:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 01:31:15.631 05:26:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:31:15.631 05:26:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:31:15.631 05:26:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:31:15.631 05:26:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:31:15.631 05:26:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:31:15.631 05:26:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:15.631 05:26:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:31:15.631 05:26:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:31:15.631 05:26:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 01:31:15.632 05:26:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:31:15.632 "name": "raid_bdev1", 01:31:15.632 "uuid": "7379429c-05ce-4d8a-b8b0-73913b2cad09", 01:31:15.632 "strip_size_kb": 64, 01:31:15.632 "state": "configuring", 01:31:15.632 "raid_level": "raid5f", 01:31:15.632 "superblock": true, 01:31:15.632 "num_base_bdevs": 4, 01:31:15.632 "num_base_bdevs_discovered": 1, 01:31:15.632 "num_base_bdevs_operational": 4, 01:31:15.632 "base_bdevs_list": [ 01:31:15.632 { 01:31:15.632 "name": "pt1", 01:31:15.632 "uuid": "00000000-0000-0000-0000-000000000001", 01:31:15.632 "is_configured": true, 01:31:15.632 "data_offset": 2048, 01:31:15.632 "data_size": 63488 01:31:15.632 }, 01:31:15.632 { 01:31:15.632 "name": null, 01:31:15.632 "uuid": "00000000-0000-0000-0000-000000000002", 01:31:15.632 "is_configured": false, 01:31:15.632 "data_offset": 0, 01:31:15.632 "data_size": 63488 01:31:15.632 }, 01:31:15.632 { 01:31:15.632 "name": null, 01:31:15.632 "uuid": "00000000-0000-0000-0000-000000000003", 01:31:15.632 "is_configured": false, 01:31:15.632 "data_offset": 2048, 01:31:15.632 "data_size": 63488 01:31:15.632 }, 01:31:15.632 { 01:31:15.632 "name": null, 01:31:15.632 "uuid": "00000000-0000-0000-0000-000000000004", 01:31:15.632 "is_configured": false, 01:31:15.632 "data_offset": 2048, 01:31:15.632 "data_size": 63488 01:31:15.632 } 01:31:15.632 ] 01:31:15.632 }' 01:31:15.632 05:26:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:31:15.632 05:26:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:31:16.200 05:26:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 01:31:16.200 05:26:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 01:31:16.200 05:26:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 
01:31:16.200 05:26:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:16.200 05:26:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:31:16.200 [2024-12-09 05:26:07.540839] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 01:31:16.200 [2024-12-09 05:26:07.541128] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:31:16.200 [2024-12-09 05:26:07.541170] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 01:31:16.200 [2024-12-09 05:26:07.541186] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:31:16.200 [2024-12-09 05:26:07.541849] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:31:16.200 [2024-12-09 05:26:07.541876] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 01:31:16.200 [2024-12-09 05:26:07.541995] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 01:31:16.200 [2024-12-09 05:26:07.542028] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 01:31:16.200 pt2 01:31:16.200 05:26:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:16.200 05:26:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 01:31:16.200 05:26:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 01:31:16.200 05:26:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 01:31:16.200 05:26:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:16.200 05:26:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:31:16.200 [2024-12-09 05:26:07.552760] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 
01:31:16.200 [2024-12-09 05:26:07.552819] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:31:16.200 [2024-12-09 05:26:07.552856] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 01:31:16.200 [2024-12-09 05:26:07.552874] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:31:16.200 [2024-12-09 05:26:07.553341] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:31:16.200 [2024-12-09 05:26:07.553386] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 01:31:16.200 [2024-12-09 05:26:07.553475] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 01:31:16.200 [2024-12-09 05:26:07.553512] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 01:31:16.200 pt3 01:31:16.200 05:26:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:16.200 05:26:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 01:31:16.200 05:26:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 01:31:16.200 05:26:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 01:31:16.200 05:26:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:16.200 05:26:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:31:16.200 [2024-12-09 05:26:07.564766] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 01:31:16.200 [2024-12-09 05:26:07.564821] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:31:16.200 [2024-12-09 05:26:07.564849] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 01:31:16.200 [2024-12-09 05:26:07.564863] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:31:16.200 [2024-12-09 05:26:07.565369] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:31:16.200 [2024-12-09 05:26:07.565435] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 01:31:16.200 [2024-12-09 05:26:07.565519] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 01:31:16.200 [2024-12-09 05:26:07.565567] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 01:31:16.200 [2024-12-09 05:26:07.565745] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 01:31:16.200 [2024-12-09 05:26:07.565781] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 01:31:16.200 [2024-12-09 05:26:07.566091] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 01:31:16.200 [2024-12-09 05:26:07.572666] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 01:31:16.200 [2024-12-09 05:26:07.572698] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 01:31:16.200 [2024-12-09 05:26:07.572917] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:31:16.200 pt4 01:31:16.200 05:26:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:16.200 05:26:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 01:31:16.200 05:26:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 01:31:16.200 05:26:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 01:31:16.200 05:26:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:31:16.200 05:26:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 01:31:16.200 05:26:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 01:31:16.200 05:26:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:31:16.200 05:26:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 01:31:16.200 05:26:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:31:16.200 05:26:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:31:16.200 05:26:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:31:16.200 05:26:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:31:16.200 05:26:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:31:16.200 05:26:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:16.200 05:26:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:31:16.200 05:26:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:31:16.200 05:26:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:16.200 05:26:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:31:16.200 "name": "raid_bdev1", 01:31:16.200 "uuid": "7379429c-05ce-4d8a-b8b0-73913b2cad09", 01:31:16.200 "strip_size_kb": 64, 01:31:16.200 "state": "online", 01:31:16.200 "raid_level": "raid5f", 01:31:16.200 "superblock": true, 01:31:16.200 "num_base_bdevs": 4, 01:31:16.200 "num_base_bdevs_discovered": 4, 01:31:16.200 "num_base_bdevs_operational": 4, 01:31:16.200 "base_bdevs_list": [ 01:31:16.200 { 01:31:16.200 "name": "pt1", 01:31:16.200 "uuid": "00000000-0000-0000-0000-000000000001", 01:31:16.200 "is_configured": true, 01:31:16.200 
"data_offset": 2048, 01:31:16.200 "data_size": 63488 01:31:16.200 }, 01:31:16.200 { 01:31:16.200 "name": "pt2", 01:31:16.200 "uuid": "00000000-0000-0000-0000-000000000002", 01:31:16.200 "is_configured": true, 01:31:16.200 "data_offset": 2048, 01:31:16.200 "data_size": 63488 01:31:16.200 }, 01:31:16.200 { 01:31:16.200 "name": "pt3", 01:31:16.200 "uuid": "00000000-0000-0000-0000-000000000003", 01:31:16.200 "is_configured": true, 01:31:16.200 "data_offset": 2048, 01:31:16.200 "data_size": 63488 01:31:16.200 }, 01:31:16.200 { 01:31:16.200 "name": "pt4", 01:31:16.200 "uuid": "00000000-0000-0000-0000-000000000004", 01:31:16.200 "is_configured": true, 01:31:16.200 "data_offset": 2048, 01:31:16.200 "data_size": 63488 01:31:16.200 } 01:31:16.200 ] 01:31:16.200 }' 01:31:16.200 05:26:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:31:16.200 05:26:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:31:16.768 05:26:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 01:31:16.768 05:26:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 01:31:16.768 05:26:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 01:31:16.768 05:26:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 01:31:16.768 05:26:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 01:31:16.768 05:26:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 01:31:16.768 05:26:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 01:31:16.768 05:26:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:16.768 05:26:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 01:31:16.768 05:26:08 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:31:16.768 [2024-12-09 05:26:08.120916] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 01:31:16.768 05:26:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:16.768 05:26:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 01:31:16.768 "name": "raid_bdev1", 01:31:16.768 "aliases": [ 01:31:16.768 "7379429c-05ce-4d8a-b8b0-73913b2cad09" 01:31:16.768 ], 01:31:16.768 "product_name": "Raid Volume", 01:31:16.768 "block_size": 512, 01:31:16.768 "num_blocks": 190464, 01:31:16.768 "uuid": "7379429c-05ce-4d8a-b8b0-73913b2cad09", 01:31:16.768 "assigned_rate_limits": { 01:31:16.768 "rw_ios_per_sec": 0, 01:31:16.768 "rw_mbytes_per_sec": 0, 01:31:16.768 "r_mbytes_per_sec": 0, 01:31:16.768 "w_mbytes_per_sec": 0 01:31:16.768 }, 01:31:16.768 "claimed": false, 01:31:16.768 "zoned": false, 01:31:16.768 "supported_io_types": { 01:31:16.768 "read": true, 01:31:16.768 "write": true, 01:31:16.768 "unmap": false, 01:31:16.768 "flush": false, 01:31:16.768 "reset": true, 01:31:16.768 "nvme_admin": false, 01:31:16.768 "nvme_io": false, 01:31:16.768 "nvme_io_md": false, 01:31:16.768 "write_zeroes": true, 01:31:16.768 "zcopy": false, 01:31:16.768 "get_zone_info": false, 01:31:16.768 "zone_management": false, 01:31:16.768 "zone_append": false, 01:31:16.768 "compare": false, 01:31:16.768 "compare_and_write": false, 01:31:16.768 "abort": false, 01:31:16.768 "seek_hole": false, 01:31:16.768 "seek_data": false, 01:31:16.768 "copy": false, 01:31:16.768 "nvme_iov_md": false 01:31:16.768 }, 01:31:16.768 "driver_specific": { 01:31:16.768 "raid": { 01:31:16.768 "uuid": "7379429c-05ce-4d8a-b8b0-73913b2cad09", 01:31:16.768 "strip_size_kb": 64, 01:31:16.768 "state": "online", 01:31:16.768 "raid_level": "raid5f", 01:31:16.768 "superblock": true, 01:31:16.768 "num_base_bdevs": 4, 01:31:16.768 "num_base_bdevs_discovered": 4, 
01:31:16.768 "num_base_bdevs_operational": 4, 01:31:16.768 "base_bdevs_list": [ 01:31:16.768 { 01:31:16.768 "name": "pt1", 01:31:16.768 "uuid": "00000000-0000-0000-0000-000000000001", 01:31:16.768 "is_configured": true, 01:31:16.768 "data_offset": 2048, 01:31:16.768 "data_size": 63488 01:31:16.768 }, 01:31:16.768 { 01:31:16.768 "name": "pt2", 01:31:16.768 "uuid": "00000000-0000-0000-0000-000000000002", 01:31:16.768 "is_configured": true, 01:31:16.768 "data_offset": 2048, 01:31:16.768 "data_size": 63488 01:31:16.768 }, 01:31:16.768 { 01:31:16.768 "name": "pt3", 01:31:16.768 "uuid": "00000000-0000-0000-0000-000000000003", 01:31:16.768 "is_configured": true, 01:31:16.768 "data_offset": 2048, 01:31:16.768 "data_size": 63488 01:31:16.768 }, 01:31:16.768 { 01:31:16.768 "name": "pt4", 01:31:16.768 "uuid": "00000000-0000-0000-0000-000000000004", 01:31:16.768 "is_configured": true, 01:31:16.768 "data_offset": 2048, 01:31:16.768 "data_size": 63488 01:31:16.768 } 01:31:16.768 ] 01:31:16.768 } 01:31:16.768 } 01:31:16.768 }' 01:31:16.768 05:26:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 01:31:16.768 05:26:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 01:31:16.768 pt2 01:31:16.768 pt3 01:31:16.768 pt4' 01:31:16.768 05:26:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:31:16.768 05:26:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 01:31:16.768 05:26:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:31:16.768 05:26:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:31:16.768 05:26:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt1 01:31:16.768 05:26:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:16.768 05:26:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:31:16.768 05:26:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:16.768 05:26:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:31:16.769 05:26:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:31:16.769 05:26:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:31:16.769 05:26:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 01:31:16.769 05:26:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:16.769 05:26:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:31:16.769 05:26:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:31:16.769 05:26:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:17.028 05:26:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:31:17.028 05:26:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:31:17.028 05:26:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:31:17.028 05:26:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:31:17.028 05:26:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 01:31:17.028 05:26:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:17.028 05:26:08 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:31:17.028 05:26:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:17.028 05:26:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:31:17.028 05:26:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:31:17.028 05:26:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:31:17.028 05:26:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 01:31:17.028 05:26:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:17.028 05:26:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:31:17.028 05:26:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:31:17.028 05:26:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:17.028 05:26:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 01:31:17.028 05:26:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 01:31:17.028 05:26:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 01:31:17.028 05:26:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:17.028 05:26:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:31:17.028 05:26:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 01:31:17.028 [2024-12-09 05:26:08.516999] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 01:31:17.028 05:26:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:17.028 
05:26:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 7379429c-05ce-4d8a-b8b0-73913b2cad09 '!=' 7379429c-05ce-4d8a-b8b0-73913b2cad09 ']' 01:31:17.029 05:26:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 01:31:17.029 05:26:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 01:31:17.029 05:26:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 01:31:17.029 05:26:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 01:31:17.029 05:26:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:17.029 05:26:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:31:17.029 [2024-12-09 05:26:08.572833] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 01:31:17.029 05:26:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:17.029 05:26:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 01:31:17.029 05:26:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:31:17.029 05:26:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:31:17.029 05:26:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 01:31:17.029 05:26:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:31:17.029 05:26:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 01:31:17.029 05:26:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:31:17.029 05:26:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:31:17.029 05:26:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 01:31:17.029 05:26:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:31:17.029 05:26:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:31:17.029 05:26:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:31:17.029 05:26:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:17.029 05:26:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:31:17.029 05:26:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:17.029 05:26:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:31:17.029 "name": "raid_bdev1", 01:31:17.029 "uuid": "7379429c-05ce-4d8a-b8b0-73913b2cad09", 01:31:17.029 "strip_size_kb": 64, 01:31:17.029 "state": "online", 01:31:17.029 "raid_level": "raid5f", 01:31:17.029 "superblock": true, 01:31:17.029 "num_base_bdevs": 4, 01:31:17.029 "num_base_bdevs_discovered": 3, 01:31:17.029 "num_base_bdevs_operational": 3, 01:31:17.029 "base_bdevs_list": [ 01:31:17.029 { 01:31:17.029 "name": null, 01:31:17.029 "uuid": "00000000-0000-0000-0000-000000000000", 01:31:17.029 "is_configured": false, 01:31:17.029 "data_offset": 0, 01:31:17.029 "data_size": 63488 01:31:17.029 }, 01:31:17.029 { 01:31:17.029 "name": "pt2", 01:31:17.029 "uuid": "00000000-0000-0000-0000-000000000002", 01:31:17.029 "is_configured": true, 01:31:17.029 "data_offset": 2048, 01:31:17.029 "data_size": 63488 01:31:17.029 }, 01:31:17.029 { 01:31:17.029 "name": "pt3", 01:31:17.029 "uuid": "00000000-0000-0000-0000-000000000003", 01:31:17.029 "is_configured": true, 01:31:17.029 "data_offset": 2048, 01:31:17.029 "data_size": 63488 01:31:17.029 }, 01:31:17.029 { 01:31:17.029 "name": "pt4", 01:31:17.029 "uuid": "00000000-0000-0000-0000-000000000004", 01:31:17.029 "is_configured": true, 01:31:17.029 
"data_offset": 2048, 01:31:17.029 "data_size": 63488 01:31:17.029 } 01:31:17.029 ] 01:31:17.029 }' 01:31:17.029 05:26:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:31:17.029 05:26:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:31:17.597 05:26:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 01:31:17.597 05:26:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:17.597 05:26:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:31:17.597 [2024-12-09 05:26:09.108976] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 01:31:17.597 [2024-12-09 05:26:09.109174] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 01:31:17.597 [2024-12-09 05:26:09.109287] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 01:31:17.597 [2024-12-09 05:26:09.109445] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 01:31:17.597 [2024-12-09 05:26:09.109465] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 01:31:17.597 05:26:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:17.597 05:26:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 01:31:17.597 05:26:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:17.597 05:26:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 01:31:17.597 05:26:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:31:17.597 05:26:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:17.597 05:26:09 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@500 -- # raid_bdev= 01:31:17.597 05:26:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 01:31:17.597 05:26:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 01:31:17.597 05:26:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 01:31:17.597 05:26:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 01:31:17.597 05:26:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:17.597 05:26:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:31:17.597 05:26:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:17.597 05:26:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 01:31:17.597 05:26:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 01:31:17.597 05:26:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 01:31:17.597 05:26:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:17.597 05:26:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:31:17.597 05:26:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:17.597 05:26:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 01:31:17.597 05:26:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 01:31:17.597 05:26:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 01:31:17.597 05:26:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:17.597 05:26:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:31:17.597 05:26:09 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:17.597 05:26:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 01:31:17.597 05:26:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 01:31:17.597 05:26:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 01:31:17.597 05:26:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 01:31:17.597 05:26:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 01:31:17.597 05:26:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:17.597 05:26:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:31:17.597 [2024-12-09 05:26:09.201018] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 01:31:17.597 [2024-12-09 05:26:09.201084] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:31:17.597 [2024-12-09 05:26:09.201114] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 01:31:17.597 [2024-12-09 05:26:09.201128] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:31:17.597 [2024-12-09 05:26:09.204308] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:31:17.597 [2024-12-09 05:26:09.204350] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 01:31:17.597 [2024-12-09 05:26:09.204480] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 01:31:17.597 [2024-12-09 05:26:09.204540] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 01:31:17.597 pt2 01:31:17.597 05:26:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:17.597 05:26:09 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 01:31:17.597 05:26:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:31:17.597 05:26:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:31:17.597 05:26:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 01:31:17.597 05:26:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:31:17.597 05:26:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 01:31:17.597 05:26:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:31:17.597 05:26:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:31:17.597 05:26:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:31:17.597 05:26:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:31:17.597 05:26:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:31:17.597 05:26:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:31:17.856 05:26:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:17.856 05:26:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:31:17.856 05:26:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:17.856 05:26:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:31:17.856 "name": "raid_bdev1", 01:31:17.856 "uuid": "7379429c-05ce-4d8a-b8b0-73913b2cad09", 01:31:17.856 "strip_size_kb": 64, 01:31:17.856 "state": "configuring", 01:31:17.856 "raid_level": "raid5f", 01:31:17.856 "superblock": true, 01:31:17.856 
"num_base_bdevs": 4, 01:31:17.856 "num_base_bdevs_discovered": 1, 01:31:17.856 "num_base_bdevs_operational": 3, 01:31:17.856 "base_bdevs_list": [ 01:31:17.856 { 01:31:17.856 "name": null, 01:31:17.856 "uuid": "00000000-0000-0000-0000-000000000000", 01:31:17.856 "is_configured": false, 01:31:17.856 "data_offset": 2048, 01:31:17.856 "data_size": 63488 01:31:17.856 }, 01:31:17.856 { 01:31:17.856 "name": "pt2", 01:31:17.856 "uuid": "00000000-0000-0000-0000-000000000002", 01:31:17.856 "is_configured": true, 01:31:17.856 "data_offset": 2048, 01:31:17.856 "data_size": 63488 01:31:17.856 }, 01:31:17.856 { 01:31:17.856 "name": null, 01:31:17.856 "uuid": "00000000-0000-0000-0000-000000000003", 01:31:17.856 "is_configured": false, 01:31:17.856 "data_offset": 2048, 01:31:17.856 "data_size": 63488 01:31:17.856 }, 01:31:17.856 { 01:31:17.856 "name": null, 01:31:17.856 "uuid": "00000000-0000-0000-0000-000000000004", 01:31:17.856 "is_configured": false, 01:31:17.856 "data_offset": 2048, 01:31:17.856 "data_size": 63488 01:31:17.856 } 01:31:17.856 ] 01:31:17.856 }' 01:31:17.856 05:26:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:31:17.856 05:26:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:31:18.114 05:26:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 01:31:18.114 05:26:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 01:31:18.114 05:26:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 01:31:18.114 05:26:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:18.114 05:26:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:31:18.114 [2024-12-09 05:26:09.725214] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 01:31:18.114 [2024-12-09 
05:26:09.725324] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:31:18.114 [2024-12-09 05:26:09.725382] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 01:31:18.114 [2024-12-09 05:26:09.725402] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:31:18.114 [2024-12-09 05:26:09.725998] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:31:18.114 [2024-12-09 05:26:09.726033] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 01:31:18.114 [2024-12-09 05:26:09.726161] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 01:31:18.114 [2024-12-09 05:26:09.726192] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 01:31:18.373 pt3 01:31:18.373 05:26:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:18.373 05:26:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 01:31:18.373 05:26:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:31:18.373 05:26:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:31:18.373 05:26:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 01:31:18.373 05:26:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:31:18.373 05:26:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 01:31:18.373 05:26:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:31:18.373 05:26:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:31:18.373 05:26:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
01:31:18.373 05:26:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:31:18.373 05:26:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:31:18.373 05:26:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:31:18.373 05:26:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:18.373 05:26:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:31:18.373 05:26:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:18.373 05:26:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:31:18.373 "name": "raid_bdev1", 01:31:18.373 "uuid": "7379429c-05ce-4d8a-b8b0-73913b2cad09", 01:31:18.374 "strip_size_kb": 64, 01:31:18.374 "state": "configuring", 01:31:18.374 "raid_level": "raid5f", 01:31:18.374 "superblock": true, 01:31:18.374 "num_base_bdevs": 4, 01:31:18.374 "num_base_bdevs_discovered": 2, 01:31:18.374 "num_base_bdevs_operational": 3, 01:31:18.374 "base_bdevs_list": [ 01:31:18.374 { 01:31:18.374 "name": null, 01:31:18.374 "uuid": "00000000-0000-0000-0000-000000000000", 01:31:18.374 "is_configured": false, 01:31:18.374 "data_offset": 2048, 01:31:18.374 "data_size": 63488 01:31:18.374 }, 01:31:18.374 { 01:31:18.374 "name": "pt2", 01:31:18.374 "uuid": "00000000-0000-0000-0000-000000000002", 01:31:18.374 "is_configured": true, 01:31:18.374 "data_offset": 2048, 01:31:18.374 "data_size": 63488 01:31:18.374 }, 01:31:18.374 { 01:31:18.374 "name": "pt3", 01:31:18.374 "uuid": "00000000-0000-0000-0000-000000000003", 01:31:18.374 "is_configured": true, 01:31:18.374 "data_offset": 2048, 01:31:18.374 "data_size": 63488 01:31:18.374 }, 01:31:18.374 { 01:31:18.374 "name": null, 01:31:18.374 "uuid": "00000000-0000-0000-0000-000000000004", 01:31:18.374 "is_configured": false, 01:31:18.374 "data_offset": 2048, 
01:31:18.374 "data_size": 63488 01:31:18.374 } 01:31:18.374 ] 01:31:18.374 }' 01:31:18.374 05:26:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:31:18.374 05:26:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:31:18.939 05:26:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 01:31:18.939 05:26:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 01:31:18.939 05:26:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 01:31:18.939 05:26:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 01:31:18.939 05:26:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:18.939 05:26:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:31:18.939 [2024-12-09 05:26:10.277469] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 01:31:18.939 [2024-12-09 05:26:10.277562] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:31:18.939 [2024-12-09 05:26:10.277605] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 01:31:18.939 [2024-12-09 05:26:10.277625] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:31:18.939 [2024-12-09 05:26:10.278405] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:31:18.939 [2024-12-09 05:26:10.278444] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 01:31:18.939 [2024-12-09 05:26:10.278572] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 01:31:18.939 [2024-12-09 05:26:10.278620] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 01:31:18.939 [2024-12-09 05:26:10.278827] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 01:31:18.939 [2024-12-09 05:26:10.278854] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 01:31:18.939 [2024-12-09 05:26:10.279236] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 01:31:18.939 [2024-12-09 05:26:10.286830] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 01:31:18.939 [2024-12-09 05:26:10.287032] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 01:31:18.939 [2024-12-09 05:26:10.287548] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:31:18.939 pt4 01:31:18.939 05:26:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:18.939 05:26:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 01:31:18.939 05:26:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:31:18.939 05:26:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:31:18.939 05:26:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 01:31:18.939 05:26:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:31:18.939 05:26:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 01:31:18.939 05:26:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:31:18.939 05:26:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:31:18.939 05:26:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:31:18.939 05:26:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:31:18.939 
05:26:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:31:18.939 05:26:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:31:18.939 05:26:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:18.939 05:26:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:31:18.940 05:26:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:18.940 05:26:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:31:18.940 "name": "raid_bdev1", 01:31:18.940 "uuid": "7379429c-05ce-4d8a-b8b0-73913b2cad09", 01:31:18.940 "strip_size_kb": 64, 01:31:18.940 "state": "online", 01:31:18.940 "raid_level": "raid5f", 01:31:18.940 "superblock": true, 01:31:18.940 "num_base_bdevs": 4, 01:31:18.940 "num_base_bdevs_discovered": 3, 01:31:18.940 "num_base_bdevs_operational": 3, 01:31:18.940 "base_bdevs_list": [ 01:31:18.940 { 01:31:18.940 "name": null, 01:31:18.940 "uuid": "00000000-0000-0000-0000-000000000000", 01:31:18.940 "is_configured": false, 01:31:18.940 "data_offset": 2048, 01:31:18.940 "data_size": 63488 01:31:18.940 }, 01:31:18.940 { 01:31:18.940 "name": "pt2", 01:31:18.940 "uuid": "00000000-0000-0000-0000-000000000002", 01:31:18.940 "is_configured": true, 01:31:18.940 "data_offset": 2048, 01:31:18.940 "data_size": 63488 01:31:18.940 }, 01:31:18.940 { 01:31:18.940 "name": "pt3", 01:31:18.940 "uuid": "00000000-0000-0000-0000-000000000003", 01:31:18.940 "is_configured": true, 01:31:18.940 "data_offset": 2048, 01:31:18.940 "data_size": 63488 01:31:18.940 }, 01:31:18.940 { 01:31:18.940 "name": "pt4", 01:31:18.940 "uuid": "00000000-0000-0000-0000-000000000004", 01:31:18.940 "is_configured": true, 01:31:18.940 "data_offset": 2048, 01:31:18.940 "data_size": 63488 01:31:18.940 } 01:31:18.940 ] 01:31:18.940 }' 01:31:18.940 05:26:10 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:31:18.940 05:26:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:31:19.198 05:26:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 01:31:19.198 05:26:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:19.198 05:26:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:31:19.457 [2024-12-09 05:26:10.815424] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 01:31:19.457 [2024-12-09 05:26:10.815464] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 01:31:19.457 [2024-12-09 05:26:10.815559] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 01:31:19.457 [2024-12-09 05:26:10.815659] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 01:31:19.457 [2024-12-09 05:26:10.815680] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 01:31:19.457 05:26:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:19.457 05:26:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 01:31:19.457 05:26:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 01:31:19.457 05:26:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:19.457 05:26:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:31:19.457 05:26:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:19.457 05:26:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 01:31:19.457 05:26:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n 
'' ']' 01:31:19.457 05:26:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 01:31:19.457 05:26:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 01:31:19.457 05:26:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 01:31:19.457 05:26:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:19.457 05:26:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:31:19.457 05:26:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:19.457 05:26:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 01:31:19.457 05:26:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:19.457 05:26:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:31:19.457 [2024-12-09 05:26:10.887425] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 01:31:19.457 [2024-12-09 05:26:10.887510] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:31:19.457 [2024-12-09 05:26:10.887547] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 01:31:19.457 [2024-12-09 05:26:10.887575] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:31:19.457 [2024-12-09 05:26:10.890809] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:31:19.457 [2024-12-09 05:26:10.890855] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 01:31:19.457 [2024-12-09 05:26:10.890971] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 01:31:19.457 [2024-12-09 05:26:10.891033] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 01:31:19.457 
[2024-12-09 05:26:10.891199] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 01:31:19.457 [2024-12-09 05:26:10.891222] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 01:31:19.457 [2024-12-09 05:26:10.891242] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 01:31:19.457 [2024-12-09 05:26:10.891347] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 01:31:19.457 [2024-12-09 05:26:10.891513] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 01:31:19.457 pt1 01:31:19.457 05:26:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:19.457 05:26:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 01:31:19.457 05:26:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 01:31:19.457 05:26:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:31:19.457 05:26:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:31:19.457 05:26:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 01:31:19.457 05:26:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:31:19.457 05:26:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 01:31:19.457 05:26:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:31:19.457 05:26:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:31:19.457 05:26:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:31:19.457 05:26:10 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 01:31:19.457 05:26:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:31:19.457 05:26:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:19.457 05:26:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:31:19.457 05:26:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:31:19.457 05:26:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:19.457 05:26:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:31:19.457 "name": "raid_bdev1", 01:31:19.457 "uuid": "7379429c-05ce-4d8a-b8b0-73913b2cad09", 01:31:19.457 "strip_size_kb": 64, 01:31:19.457 "state": "configuring", 01:31:19.457 "raid_level": "raid5f", 01:31:19.457 "superblock": true, 01:31:19.457 "num_base_bdevs": 4, 01:31:19.457 "num_base_bdevs_discovered": 2, 01:31:19.457 "num_base_bdevs_operational": 3, 01:31:19.457 "base_bdevs_list": [ 01:31:19.457 { 01:31:19.457 "name": null, 01:31:19.457 "uuid": "00000000-0000-0000-0000-000000000000", 01:31:19.457 "is_configured": false, 01:31:19.457 "data_offset": 2048, 01:31:19.457 "data_size": 63488 01:31:19.457 }, 01:31:19.457 { 01:31:19.457 "name": "pt2", 01:31:19.457 "uuid": "00000000-0000-0000-0000-000000000002", 01:31:19.457 "is_configured": true, 01:31:19.457 "data_offset": 2048, 01:31:19.457 "data_size": 63488 01:31:19.457 }, 01:31:19.457 { 01:31:19.457 "name": "pt3", 01:31:19.457 "uuid": "00000000-0000-0000-0000-000000000003", 01:31:19.457 "is_configured": true, 01:31:19.457 "data_offset": 2048, 01:31:19.457 "data_size": 63488 01:31:19.457 }, 01:31:19.457 { 01:31:19.457 "name": null, 01:31:19.457 "uuid": "00000000-0000-0000-0000-000000000004", 01:31:19.457 "is_configured": false, 01:31:19.457 "data_offset": 2048, 01:31:19.457 "data_size": 63488 01:31:19.457 } 01:31:19.457 ] 
01:31:19.457 }' 01:31:19.457 05:26:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:31:19.457 05:26:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:31:20.024 05:26:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 01:31:20.024 05:26:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:20.024 05:26:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:31:20.024 05:26:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 01:31:20.024 05:26:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:20.024 05:26:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 01:31:20.024 05:26:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 01:31:20.024 05:26:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:20.024 05:26:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:31:20.024 [2024-12-09 05:26:11.456232] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 01:31:20.024 [2024-12-09 05:26:11.456326] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:31:20.024 [2024-12-09 05:26:11.456376] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 01:31:20.024 [2024-12-09 05:26:11.456431] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:31:20.024 [2024-12-09 05:26:11.457030] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:31:20.024 [2024-12-09 05:26:11.457062] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 
01:31:20.024 [2024-12-09 05:26:11.457185] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 01:31:20.024 [2024-12-09 05:26:11.457246] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 01:31:20.024 [2024-12-09 05:26:11.457466] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 01:31:20.024 [2024-12-09 05:26:11.457483] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 01:31:20.024 [2024-12-09 05:26:11.457848] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 01:31:20.024 [2024-12-09 05:26:11.464568] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 01:31:20.024 [2024-12-09 05:26:11.464598] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 01:31:20.024 [2024-12-09 05:26:11.464969] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:31:20.024 pt4 01:31:20.024 05:26:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:20.024 05:26:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 01:31:20.024 05:26:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:31:20.024 05:26:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:31:20.024 05:26:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 01:31:20.024 05:26:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:31:20.024 05:26:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 01:31:20.024 05:26:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:31:20.024 05:26:11 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:31:20.024 05:26:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:31:20.024 05:26:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:31:20.024 05:26:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:31:20.024 05:26:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:20.024 05:26:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:31:20.024 05:26:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:31:20.024 05:26:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:20.024 05:26:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:31:20.024 "name": "raid_bdev1", 01:31:20.024 "uuid": "7379429c-05ce-4d8a-b8b0-73913b2cad09", 01:31:20.024 "strip_size_kb": 64, 01:31:20.024 "state": "online", 01:31:20.024 "raid_level": "raid5f", 01:31:20.024 "superblock": true, 01:31:20.024 "num_base_bdevs": 4, 01:31:20.024 "num_base_bdevs_discovered": 3, 01:31:20.024 "num_base_bdevs_operational": 3, 01:31:20.024 "base_bdevs_list": [ 01:31:20.024 { 01:31:20.024 "name": null, 01:31:20.024 "uuid": "00000000-0000-0000-0000-000000000000", 01:31:20.024 "is_configured": false, 01:31:20.024 "data_offset": 2048, 01:31:20.024 "data_size": 63488 01:31:20.024 }, 01:31:20.024 { 01:31:20.024 "name": "pt2", 01:31:20.024 "uuid": "00000000-0000-0000-0000-000000000002", 01:31:20.024 "is_configured": true, 01:31:20.024 "data_offset": 2048, 01:31:20.024 "data_size": 63488 01:31:20.024 }, 01:31:20.024 { 01:31:20.024 "name": "pt3", 01:31:20.024 "uuid": "00000000-0000-0000-0000-000000000003", 01:31:20.024 "is_configured": true, 01:31:20.024 "data_offset": 2048, 01:31:20.024 "data_size": 63488 
01:31:20.024 }, 01:31:20.024 { 01:31:20.024 "name": "pt4", 01:31:20.024 "uuid": "00000000-0000-0000-0000-000000000004", 01:31:20.024 "is_configured": true, 01:31:20.024 "data_offset": 2048, 01:31:20.024 "data_size": 63488 01:31:20.024 } 01:31:20.024 ] 01:31:20.024 }' 01:31:20.024 05:26:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:31:20.024 05:26:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:31:20.589 05:26:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 01:31:20.589 05:26:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 01:31:20.589 05:26:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:20.589 05:26:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:31:20.589 05:26:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:20.589 05:26:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 01:31:20.589 05:26:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 01:31:20.589 05:26:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 01:31:20.589 05:26:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:20.589 05:26:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:31:20.589 [2024-12-09 05:26:12.017041] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 01:31:20.589 05:26:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:20.589 05:26:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 7379429c-05ce-4d8a-b8b0-73913b2cad09 '!=' 7379429c-05ce-4d8a-b8b0-73913b2cad09 ']' 01:31:20.589 05:26:12 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 84458 01:31:20.589 05:26:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 84458 ']' 01:31:20.589 05:26:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 84458 01:31:20.589 05:26:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 01:31:20.589 05:26:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:31:20.589 05:26:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84458 01:31:20.589 killing process with pid 84458 01:31:20.589 05:26:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:31:20.589 05:26:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:31:20.589 05:26:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84458' 01:31:20.589 05:26:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 84458 01:31:20.589 [2024-12-09 05:26:12.092093] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 01:31:20.589 05:26:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 84458 01:31:20.589 [2024-12-09 05:26:12.092204] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 01:31:20.589 [2024-12-09 05:26:12.092306] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 01:31:20.589 [2024-12-09 05:26:12.092329] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 01:31:20.847 [2024-12-09 05:26:12.432452] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 01:31:22.219 05:26:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 01:31:22.219 
01:31:22.219 real 0m9.658s 01:31:22.219 user 0m15.758s 01:31:22.219 sys 0m1.445s 01:31:22.219 05:26:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 01:31:22.219 ************************************ 01:31:22.219 END TEST raid5f_superblock_test 01:31:22.219 ************************************ 01:31:22.219 05:26:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 01:31:22.219 05:26:13 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 01:31:22.219 05:26:13 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false true 01:31:22.219 05:26:13 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 01:31:22.219 05:26:13 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 01:31:22.219 05:26:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 01:31:22.219 ************************************ 01:31:22.219 START TEST raid5f_rebuild_test 01:31:22.219 ************************************ 01:31:22.219 05:26:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 false false true 01:31:22.219 05:26:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 01:31:22.219 05:26:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 01:31:22.219 05:26:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 01:31:22.219 05:26:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 01:31:22.219 05:26:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 01:31:22.219 05:26:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 01:31:22.219 05:26:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 01:31:22.219 05:26:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev1 01:31:22.219 05:26:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 01:31:22.219 05:26:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 01:31:22.219 05:26:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 01:31:22.219 05:26:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 01:31:22.219 05:26:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 01:31:22.219 05:26:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 01:31:22.219 05:26:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 01:31:22.219 05:26:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 01:31:22.219 05:26:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 01:31:22.219 05:26:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 01:31:22.219 05:26:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 01:31:22.219 05:26:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 01:31:22.219 05:26:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 01:31:22.219 05:26:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 01:31:22.219 05:26:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 01:31:22.219 05:26:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 01:31:22.219 05:26:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 01:31:22.219 05:26:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 01:31:22.219 05:26:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 01:31:22.219 05:26:13 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 01:31:22.219 05:26:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 01:31:22.219 05:26:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 01:31:22.219 05:26:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 01:31:22.219 05:26:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=84955 01:31:22.219 05:26:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 01:31:22.219 05:26:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 84955 01:31:22.219 05:26:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 84955 ']' 01:31:22.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:31:22.219 05:26:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:31:22.219 05:26:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 01:31:22.219 05:26:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:31:22.219 05:26:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 01:31:22.219 05:26:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 01:31:22.219 [2024-12-09 05:26:13.744583] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:31:22.219 I/O size of 3145728 is greater than zero copy threshold (65536). 01:31:22.219 Zero copy mechanism will not be used. 
01:31:22.219 [2024-12-09 05:26:13.745127] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84955 ] 01:31:22.477 [2024-12-09 05:26:13.937160] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:31:22.736 [2024-12-09 05:26:14.116447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:31:22.736 [2024-12-09 05:26:14.326905] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 01:31:22.736 [2024-12-09 05:26:14.326975] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 01:31:23.316 05:26:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:31:23.316 05:26:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 01:31:23.316 05:26:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 01:31:23.316 05:26:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 01:31:23.316 05:26:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:23.316 05:26:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 01:31:23.316 BaseBdev1_malloc 01:31:23.316 05:26:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:23.316 05:26:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 01:31:23.316 05:26:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:23.316 05:26:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 01:31:23.316 [2024-12-09 05:26:14.746160] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 01:31:23.316 [2024-12-09 05:26:14.746250] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:31:23.316 [2024-12-09 05:26:14.746280] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 01:31:23.316 [2024-12-09 05:26:14.746296] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:31:23.316 [2024-12-09 05:26:14.749133] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:31:23.316 [2024-12-09 05:26:14.749197] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 01:31:23.316 BaseBdev1 01:31:23.316 05:26:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:23.316 05:26:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 01:31:23.316 05:26:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 01:31:23.316 05:26:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:23.316 05:26:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 01:31:23.316 BaseBdev2_malloc 01:31:23.316 05:26:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:23.316 05:26:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 01:31:23.316 05:26:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:23.316 05:26:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 01:31:23.316 [2024-12-09 05:26:14.797590] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 01:31:23.316 [2024-12-09 05:26:14.797692] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:31:23.316 [2024-12-09 05:26:14.797728] vbdev_passthru.c: 
682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 01:31:23.316 [2024-12-09 05:26:14.797747] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:31:23.316 [2024-12-09 05:26:14.800854] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:31:23.316 [2024-12-09 05:26:14.800904] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 01:31:23.316 BaseBdev2 01:31:23.316 05:26:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:23.316 05:26:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 01:31:23.316 05:26:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 01:31:23.316 05:26:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:23.316 05:26:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 01:31:23.316 BaseBdev3_malloc 01:31:23.316 05:26:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:23.316 05:26:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 01:31:23.316 05:26:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:23.316 05:26:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 01:31:23.316 [2024-12-09 05:26:14.860354] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 01:31:23.316 [2024-12-09 05:26:14.860647] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:31:23.316 [2024-12-09 05:26:14.860689] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 01:31:23.316 [2024-12-09 05:26:14.860709] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:31:23.316 
[2024-12-09 05:26:14.863559] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:31:23.316 [2024-12-09 05:26:14.863630] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 01:31:23.316 BaseBdev3 01:31:23.316 05:26:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:23.316 05:26:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 01:31:23.316 05:26:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 01:31:23.316 05:26:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:23.316 05:26:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 01:31:23.316 BaseBdev4_malloc 01:31:23.316 05:26:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:23.317 05:26:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 01:31:23.317 05:26:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:23.317 05:26:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 01:31:23.317 [2024-12-09 05:26:14.912938] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 01:31:23.317 [2024-12-09 05:26:14.913175] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:31:23.317 [2024-12-09 05:26:14.913216] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 01:31:23.317 [2024-12-09 05:26:14.913237] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:31:23.317 [2024-12-09 05:26:14.916037] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:31:23.317 [2024-12-09 05:26:14.916092] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev4 01:31:23.317 BaseBdev4 01:31:23.317 05:26:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:23.317 05:26:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 01:31:23.317 05:26:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:23.317 05:26:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 01:31:23.575 spare_malloc 01:31:23.575 05:26:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:23.575 05:26:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 01:31:23.575 05:26:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:23.575 05:26:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 01:31:23.575 spare_delay 01:31:23.575 05:26:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:23.575 05:26:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 01:31:23.575 05:26:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:23.575 05:26:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 01:31:23.575 [2024-12-09 05:26:14.977390] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 01:31:23.576 [2024-12-09 05:26:14.977539] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:31:23.576 [2024-12-09 05:26:14.977568] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 01:31:23.576 [2024-12-09 05:26:14.977586] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:31:23.576 [2024-12-09 05:26:14.980733] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:31:23.576 [2024-12-09 05:26:14.980793] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 01:31:23.576 spare 01:31:23.576 05:26:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:23.576 05:26:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 01:31:23.576 05:26:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:23.576 05:26:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 01:31:23.576 [2024-12-09 05:26:14.989687] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 01:31:23.576 [2024-12-09 05:26:14.992193] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 01:31:23.576 [2024-12-09 05:26:14.992289] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 01:31:23.576 [2024-12-09 05:26:14.992360] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 01:31:23.576 [2024-12-09 05:26:14.992504] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 01:31:23.576 [2024-12-09 05:26:14.992524] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 01:31:23.576 [2024-12-09 05:26:14.992820] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 01:31:23.576 [2024-12-09 05:26:14.999389] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 01:31:23.576 [2024-12-09 05:26:14.999574] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 01:31:23.576 [2024-12-09 05:26:14.999994] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:31:23.576 05:26:15 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:23.576 05:26:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 01:31:23.576 05:26:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:31:23.576 05:26:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:31:23.576 05:26:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 01:31:23.576 05:26:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:31:23.576 05:26:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 01:31:23.576 05:26:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:31:23.576 05:26:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:31:23.576 05:26:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:31:23.576 05:26:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:31:23.576 05:26:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:31:23.576 05:26:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:23.576 05:26:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 01:31:23.576 05:26:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:31:23.576 05:26:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:23.576 05:26:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:31:23.576 "name": "raid_bdev1", 01:31:23.576 "uuid": "9ac1dc23-ace5-453e-8430-c36157df2cbc", 01:31:23.576 "strip_size_kb": 64, 01:31:23.576 "state": "online", 01:31:23.576 
"raid_level": "raid5f", 01:31:23.576 "superblock": false, 01:31:23.576 "num_base_bdevs": 4, 01:31:23.576 "num_base_bdevs_discovered": 4, 01:31:23.576 "num_base_bdevs_operational": 4, 01:31:23.576 "base_bdevs_list": [ 01:31:23.576 { 01:31:23.576 "name": "BaseBdev1", 01:31:23.576 "uuid": "a333de63-e7ed-5e3f-8b94-7183c1b76e88", 01:31:23.576 "is_configured": true, 01:31:23.576 "data_offset": 0, 01:31:23.576 "data_size": 65536 01:31:23.576 }, 01:31:23.576 { 01:31:23.576 "name": "BaseBdev2", 01:31:23.576 "uuid": "2e8dfd4a-4996-54f1-bcf1-b1307e618118", 01:31:23.576 "is_configured": true, 01:31:23.576 "data_offset": 0, 01:31:23.576 "data_size": 65536 01:31:23.576 }, 01:31:23.576 { 01:31:23.576 "name": "BaseBdev3", 01:31:23.576 "uuid": "e7d0d07d-7e62-5dcf-afa5-a5c341d2ee40", 01:31:23.576 "is_configured": true, 01:31:23.576 "data_offset": 0, 01:31:23.576 "data_size": 65536 01:31:23.576 }, 01:31:23.576 { 01:31:23.576 "name": "BaseBdev4", 01:31:23.576 "uuid": "8cd0a785-69b8-5e23-8c46-9018c6ce1efa", 01:31:23.576 "is_configured": true, 01:31:23.576 "data_offset": 0, 01:31:23.576 "data_size": 65536 01:31:23.576 } 01:31:23.576 ] 01:31:23.576 }' 01:31:23.576 05:26:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:31:23.576 05:26:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 01:31:24.141 05:26:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 01:31:24.141 05:26:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 01:31:24.141 05:26:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:24.141 05:26:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 01:31:24.141 [2024-12-09 05:26:15.545193] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 01:31:24.141 05:26:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
01:31:24.141 05:26:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=196608 01:31:24.141 05:26:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 01:31:24.141 05:26:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 01:31:24.141 05:26:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:24.141 05:26:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 01:31:24.141 05:26:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:24.141 05:26:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 01:31:24.141 05:26:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 01:31:24.141 05:26:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 01:31:24.141 05:26:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 01:31:24.141 05:26:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 01:31:24.141 05:26:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 01:31:24.141 05:26:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 01:31:24.141 05:26:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 01:31:24.141 05:26:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 01:31:24.141 05:26:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 01:31:24.141 05:26:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 01:31:24.141 05:26:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 01:31:24.141 05:26:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 
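[Editor's note] The `raid_bdev_size=196608` captured above is consistent with the geometry logged earlier (`blockcnt 196608, blocklen 512`): four 32 MiB malloc base bdevs with 512 B blocks, of which raid5f exposes n-1 strips' worth of data capacity. A back-of-envelope check:

```shell
#!/usr/bin/env bash
# Recompute the raid bdev block count from the parameters used by the test:
# bdev_malloc_create 32 512 -> 32 MiB base bdevs, 512 B block size, 4 bdevs.
malloc_mb=32
blocklen=512
num_base_bdevs=4
base_blocks=$(( malloc_mb * 1024 * 1024 / blocklen ))    # 65536 blocks per base bdev
raid_blocks=$(( (num_base_bdevs - 1) * base_blocks ))    # raid5f keeps n-1 data strips
echo "$raid_blocks"                                      # 196608, matching blockcnt
```

The per-bdev figure of 65536 blocks also matches the `"data_size": 65536` fields in the JSON dumped by `bdev_raid_get_bdevs`.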
01:31:24.141 05:26:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 01:31:24.400 [2024-12-09 05:26:15.941109] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 01:31:24.400 /dev/nbd0 01:31:24.400 05:26:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 01:31:24.400 05:26:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 01:31:24.400 05:26:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 01:31:24.400 05:26:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 01:31:24.400 05:26:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 01:31:24.400 05:26:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 01:31:24.400 05:26:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 01:31:24.400 05:26:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 01:31:24.400 05:26:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 01:31:24.400 05:26:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 01:31:24.400 05:26:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 01:31:24.400 1+0 records in 01:31:24.400 1+0 records out 01:31:24.400 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000334574 s, 12.2 MB/s 01:31:24.400 05:26:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:31:24.400 05:26:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 01:31:24.400 05:26:16 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:31:24.400 05:26:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 01:31:24.400 05:26:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 01:31:24.400 05:26:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 01:31:24.400 05:26:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 01:31:24.400 05:26:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 01:31:24.400 05:26:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 01:31:24.400 05:26:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 192 01:31:24.400 05:26:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 01:31:25.335 512+0 records in 01:31:25.335 512+0 records out 01:31:25.335 100663296 bytes (101 MB, 96 MiB) copied, 0.614515 s, 164 MB/s 01:31:25.335 05:26:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 01:31:25.335 05:26:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 01:31:25.335 05:26:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 01:31:25.335 05:26:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 01:31:25.335 05:26:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 01:31:25.335 05:26:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 01:31:25.335 05:26:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 01:31:25.335 05:26:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 01:31:25.335 
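[Editor's note] The `write_unit_size=384` and `dd ... bs=196608 count=512` above follow from full-stripe writes on raid5f: with `strip_size_kb=64`, a strip is 128 blocks, and a full stripe spans the 3 data strips (the `echo 192` is the same quantity expressed in KiB). A sketch of the arithmetic, under those assumptions:

```shell
#!/usr/bin/env bash
# Derive the full-stripe write unit used by the dd workload above.
strip_size_kb=64
blocklen=512
num_base_bdevs=4
strip_blocks=$(( strip_size_kb * 1024 / blocklen ))       # 128 blocks per strip
write_unit=$(( (num_base_bdevs - 1) * strip_blocks ))     # 384 blocks per full stripe
bs=$(( write_unit * blocklen ))                           # 196608 bytes, the dd bs=
total=$(( bs * 512 ))                                     # 100663296 bytes over 512 records
echo "$write_unit $bs $total"
```

512 records of 196608 bytes gives exactly the 100663296 bytes (96 MiB) that dd reports.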
[2024-12-09 05:26:16.923086] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:31:25.335 05:26:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 01:31:25.335 05:26:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 01:31:25.335 05:26:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:31:25.335 05:26:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:31:25.335 05:26:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 01:31:25.335 05:26:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 01:31:25.335 05:26:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 01:31:25.335 05:26:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 01:31:25.335 05:26:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:25.335 05:26:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 01:31:25.335 [2024-12-09 05:26:16.942678] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 01:31:25.335 05:26:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:25.335 05:26:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 01:31:25.335 05:26:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:31:25.335 05:26:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:31:25.335 05:26:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 01:31:25.335 05:26:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:31:25.335 05:26:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 01:31:25.335 05:26:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:31:25.335 05:26:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:31:25.335 05:26:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:31:25.335 05:26:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:31:25.335 05:26:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:31:25.335 05:26:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:31:25.594 05:26:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:25.594 05:26:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 01:31:25.594 05:26:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:25.594 05:26:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:31:25.594 "name": "raid_bdev1", 01:31:25.594 "uuid": "9ac1dc23-ace5-453e-8430-c36157df2cbc", 01:31:25.594 "strip_size_kb": 64, 01:31:25.594 "state": "online", 01:31:25.594 "raid_level": "raid5f", 01:31:25.594 "superblock": false, 01:31:25.594 "num_base_bdevs": 4, 01:31:25.594 "num_base_bdevs_discovered": 3, 01:31:25.594 "num_base_bdevs_operational": 3, 01:31:25.594 "base_bdevs_list": [ 01:31:25.594 { 01:31:25.594 "name": null, 01:31:25.594 "uuid": "00000000-0000-0000-0000-000000000000", 01:31:25.594 "is_configured": false, 01:31:25.594 "data_offset": 0, 01:31:25.594 "data_size": 65536 01:31:25.594 }, 01:31:25.594 { 01:31:25.594 "name": "BaseBdev2", 01:31:25.594 "uuid": "2e8dfd4a-4996-54f1-bcf1-b1307e618118", 01:31:25.594 "is_configured": true, 01:31:25.594 "data_offset": 0, 01:31:25.594 "data_size": 65536 01:31:25.594 }, 01:31:25.594 { 01:31:25.594 "name": "BaseBdev3", 01:31:25.594 "uuid": 
"e7d0d07d-7e62-5dcf-afa5-a5c341d2ee40", 01:31:25.594 "is_configured": true, 01:31:25.594 "data_offset": 0, 01:31:25.594 "data_size": 65536 01:31:25.594 }, 01:31:25.594 { 01:31:25.594 "name": "BaseBdev4", 01:31:25.594 "uuid": "8cd0a785-69b8-5e23-8c46-9018c6ce1efa", 01:31:25.594 "is_configured": true, 01:31:25.594 "data_offset": 0, 01:31:25.594 "data_size": 65536 01:31:25.594 } 01:31:25.594 ] 01:31:25.594 }' 01:31:25.594 05:26:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:31:25.594 05:26:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 01:31:25.852 05:26:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 01:31:25.852 05:26:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:25.852 05:26:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 01:31:25.852 [2024-12-09 05:26:17.458872] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 01:31:26.111 [2024-12-09 05:26:17.473159] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 01:31:26.111 05:26:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:26.111 05:26:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 01:31:26.111 [2024-12-09 05:26:17.482021] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 01:31:27.090 05:26:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 01:31:27.090 05:26:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:31:27.090 05:26:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 01:31:27.090 05:26:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 01:31:27.090 05:26:18 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:31:27.090 05:26:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:31:27.090 05:26:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:27.090 05:26:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:31:27.090 05:26:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 01:31:27.090 05:26:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:27.090 05:26:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:31:27.090 "name": "raid_bdev1", 01:31:27.090 "uuid": "9ac1dc23-ace5-453e-8430-c36157df2cbc", 01:31:27.090 "strip_size_kb": 64, 01:31:27.090 "state": "online", 01:31:27.090 "raid_level": "raid5f", 01:31:27.090 "superblock": false, 01:31:27.090 "num_base_bdevs": 4, 01:31:27.090 "num_base_bdevs_discovered": 4, 01:31:27.090 "num_base_bdevs_operational": 4, 01:31:27.090 "process": { 01:31:27.090 "type": "rebuild", 01:31:27.090 "target": "spare", 01:31:27.090 "progress": { 01:31:27.090 "blocks": 17280, 01:31:27.090 "percent": 8 01:31:27.090 } 01:31:27.090 }, 01:31:27.090 "base_bdevs_list": [ 01:31:27.090 { 01:31:27.090 "name": "spare", 01:31:27.090 "uuid": "576958e7-f9f2-5d49-a4f2-ec48d39b5056", 01:31:27.090 "is_configured": true, 01:31:27.090 "data_offset": 0, 01:31:27.090 "data_size": 65536 01:31:27.090 }, 01:31:27.090 { 01:31:27.090 "name": "BaseBdev2", 01:31:27.090 "uuid": "2e8dfd4a-4996-54f1-bcf1-b1307e618118", 01:31:27.090 "is_configured": true, 01:31:27.090 "data_offset": 0, 01:31:27.090 "data_size": 65536 01:31:27.090 }, 01:31:27.090 { 01:31:27.090 "name": "BaseBdev3", 01:31:27.090 "uuid": "e7d0d07d-7e62-5dcf-afa5-a5c341d2ee40", 01:31:27.090 "is_configured": true, 01:31:27.090 "data_offset": 0, 01:31:27.090 "data_size": 65536 01:31:27.090 }, 
01:31:27.090 { 01:31:27.090 "name": "BaseBdev4", 01:31:27.090 "uuid": "8cd0a785-69b8-5e23-8c46-9018c6ce1efa", 01:31:27.090 "is_configured": true, 01:31:27.090 "data_offset": 0, 01:31:27.090 "data_size": 65536 01:31:27.090 } 01:31:27.090 ] 01:31:27.090 }' 01:31:27.090 05:26:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:31:27.090 05:26:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 01:31:27.090 05:26:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:31:27.091 05:26:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 01:31:27.091 05:26:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 01:31:27.091 05:26:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:27.091 05:26:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 01:31:27.091 [2024-12-09 05:26:18.647346] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 01:31:27.091 [2024-12-09 05:26:18.695474] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 01:31:27.091 [2024-12-09 05:26:18.695774] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:31:27.091 [2024-12-09 05:26:18.695807] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 01:31:27.091 [2024-12-09 05:26:18.695829] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 01:31:27.350 05:26:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:27.350 05:26:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 01:31:27.350 05:26:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 01:31:27.350 05:26:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:31:27.350 05:26:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 01:31:27.350 05:26:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:31:27.350 05:26:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 01:31:27.350 05:26:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:31:27.350 05:26:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:31:27.350 05:26:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:31:27.350 05:26:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:31:27.350 05:26:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:31:27.350 05:26:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:31:27.350 05:26:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:27.350 05:26:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 01:31:27.350 05:26:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:27.350 05:26:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:31:27.350 "name": "raid_bdev1", 01:31:27.350 "uuid": "9ac1dc23-ace5-453e-8430-c36157df2cbc", 01:31:27.350 "strip_size_kb": 64, 01:31:27.350 "state": "online", 01:31:27.350 "raid_level": "raid5f", 01:31:27.350 "superblock": false, 01:31:27.350 "num_base_bdevs": 4, 01:31:27.350 "num_base_bdevs_discovered": 3, 01:31:27.350 "num_base_bdevs_operational": 3, 01:31:27.350 "base_bdevs_list": [ 01:31:27.350 { 01:31:27.350 "name": null, 01:31:27.350 "uuid": 
"00000000-0000-0000-0000-000000000000", 01:31:27.350 "is_configured": false, 01:31:27.350 "data_offset": 0, 01:31:27.350 "data_size": 65536 01:31:27.350 }, 01:31:27.350 { 01:31:27.350 "name": "BaseBdev2", 01:31:27.350 "uuid": "2e8dfd4a-4996-54f1-bcf1-b1307e618118", 01:31:27.350 "is_configured": true, 01:31:27.350 "data_offset": 0, 01:31:27.350 "data_size": 65536 01:31:27.350 }, 01:31:27.350 { 01:31:27.350 "name": "BaseBdev3", 01:31:27.350 "uuid": "e7d0d07d-7e62-5dcf-afa5-a5c341d2ee40", 01:31:27.350 "is_configured": true, 01:31:27.350 "data_offset": 0, 01:31:27.350 "data_size": 65536 01:31:27.350 }, 01:31:27.350 { 01:31:27.350 "name": "BaseBdev4", 01:31:27.350 "uuid": "8cd0a785-69b8-5e23-8c46-9018c6ce1efa", 01:31:27.350 "is_configured": true, 01:31:27.350 "data_offset": 0, 01:31:27.350 "data_size": 65536 01:31:27.350 } 01:31:27.350 ] 01:31:27.350 }' 01:31:27.350 05:26:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:31:27.350 05:26:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 01:31:27.918 05:26:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 01:31:27.918 05:26:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:31:27.918 05:26:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 01:31:27.918 05:26:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 01:31:27.918 05:26:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:31:27.918 05:26:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:31:27.918 05:26:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:27.918 05:26:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 01:31:27.918 05:26:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- 
# jq -r '.[] | select(.name == "raid_bdev1")' 01:31:27.918 05:26:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:27.918 05:26:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:31:27.918 "name": "raid_bdev1", 01:31:27.918 "uuid": "9ac1dc23-ace5-453e-8430-c36157df2cbc", 01:31:27.918 "strip_size_kb": 64, 01:31:27.918 "state": "online", 01:31:27.918 "raid_level": "raid5f", 01:31:27.918 "superblock": false, 01:31:27.918 "num_base_bdevs": 4, 01:31:27.918 "num_base_bdevs_discovered": 3, 01:31:27.918 "num_base_bdevs_operational": 3, 01:31:27.918 "base_bdevs_list": [ 01:31:27.918 { 01:31:27.918 "name": null, 01:31:27.918 "uuid": "00000000-0000-0000-0000-000000000000", 01:31:27.918 "is_configured": false, 01:31:27.918 "data_offset": 0, 01:31:27.918 "data_size": 65536 01:31:27.918 }, 01:31:27.918 { 01:31:27.918 "name": "BaseBdev2", 01:31:27.918 "uuid": "2e8dfd4a-4996-54f1-bcf1-b1307e618118", 01:31:27.918 "is_configured": true, 01:31:27.918 "data_offset": 0, 01:31:27.918 "data_size": 65536 01:31:27.918 }, 01:31:27.918 { 01:31:27.918 "name": "BaseBdev3", 01:31:27.918 "uuid": "e7d0d07d-7e62-5dcf-afa5-a5c341d2ee40", 01:31:27.918 "is_configured": true, 01:31:27.918 "data_offset": 0, 01:31:27.918 "data_size": 65536 01:31:27.918 }, 01:31:27.918 { 01:31:27.918 "name": "BaseBdev4", 01:31:27.918 "uuid": "8cd0a785-69b8-5e23-8c46-9018c6ce1efa", 01:31:27.918 "is_configured": true, 01:31:27.918 "data_offset": 0, 01:31:27.918 "data_size": 65536 01:31:27.918 } 01:31:27.918 ] 01:31:27.918 }' 01:31:27.918 05:26:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:31:27.918 05:26:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 01:31:27.918 05:26:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:31:27.918 05:26:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == 
\n\o\n\e ]] 01:31:27.918 05:26:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 01:31:27.918 05:26:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:27.918 05:26:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 01:31:27.918 [2024-12-09 05:26:19.408350] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 01:31:27.918 [2024-12-09 05:26:19.423492] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b820 01:31:27.918 05:26:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:27.918 05:26:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 01:31:27.918 [2024-12-09 05:26:19.434533] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 01:31:28.851 05:26:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 01:31:28.851 05:26:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:31:28.851 05:26:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 01:31:28.851 05:26:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 01:31:28.851 05:26:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:31:28.851 05:26:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:31:28.851 05:26:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:31:28.851 05:26:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:28.851 05:26:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 01:31:28.851 05:26:20 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:29.110 05:26:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:31:29.110 "name": "raid_bdev1", 01:31:29.110 "uuid": "9ac1dc23-ace5-453e-8430-c36157df2cbc", 01:31:29.110 "strip_size_kb": 64, 01:31:29.110 "state": "online", 01:31:29.110 "raid_level": "raid5f", 01:31:29.110 "superblock": false, 01:31:29.110 "num_base_bdevs": 4, 01:31:29.110 "num_base_bdevs_discovered": 4, 01:31:29.110 "num_base_bdevs_operational": 4, 01:31:29.110 "process": { 01:31:29.110 "type": "rebuild", 01:31:29.110 "target": "spare", 01:31:29.110 "progress": { 01:31:29.110 "blocks": 17280, 01:31:29.110 "percent": 8 01:31:29.110 } 01:31:29.110 }, 01:31:29.110 "base_bdevs_list": [ 01:31:29.110 { 01:31:29.110 "name": "spare", 01:31:29.110 "uuid": "576958e7-f9f2-5d49-a4f2-ec48d39b5056", 01:31:29.110 "is_configured": true, 01:31:29.110 "data_offset": 0, 01:31:29.110 "data_size": 65536 01:31:29.110 }, 01:31:29.110 { 01:31:29.110 "name": "BaseBdev2", 01:31:29.110 "uuid": "2e8dfd4a-4996-54f1-bcf1-b1307e618118", 01:31:29.110 "is_configured": true, 01:31:29.110 "data_offset": 0, 01:31:29.110 "data_size": 65536 01:31:29.110 }, 01:31:29.110 { 01:31:29.110 "name": "BaseBdev3", 01:31:29.110 "uuid": "e7d0d07d-7e62-5dcf-afa5-a5c341d2ee40", 01:31:29.110 "is_configured": true, 01:31:29.110 "data_offset": 0, 01:31:29.110 "data_size": 65536 01:31:29.110 }, 01:31:29.110 { 01:31:29.110 "name": "BaseBdev4", 01:31:29.110 "uuid": "8cd0a785-69b8-5e23-8c46-9018c6ce1efa", 01:31:29.110 "is_configured": true, 01:31:29.110 "data_offset": 0, 01:31:29.110 "data_size": 65536 01:31:29.110 } 01:31:29.110 ] 01:31:29.110 }' 01:31:29.110 05:26:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:31:29.110 05:26:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 01:31:29.110 05:26:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 01:31:29.110 05:26:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 01:31:29.110 05:26:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 01:31:29.110 05:26:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 01:31:29.110 05:26:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 01:31:29.110 05:26:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=682 01:31:29.110 05:26:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 01:31:29.110 05:26:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 01:31:29.110 05:26:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:31:29.110 05:26:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 01:31:29.110 05:26:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 01:31:29.110 05:26:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:31:29.110 05:26:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:31:29.110 05:26:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:31:29.110 05:26:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:29.110 05:26:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 01:31:29.110 05:26:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:29.110 05:26:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:31:29.110 "name": "raid_bdev1", 01:31:29.110 "uuid": "9ac1dc23-ace5-453e-8430-c36157df2cbc", 01:31:29.110 "strip_size_kb": 64, 
01:31:29.110 "state": "online", 01:31:29.110 "raid_level": "raid5f", 01:31:29.110 "superblock": false, 01:31:29.110 "num_base_bdevs": 4, 01:31:29.110 "num_base_bdevs_discovered": 4, 01:31:29.110 "num_base_bdevs_operational": 4, 01:31:29.110 "process": { 01:31:29.110 "type": "rebuild", 01:31:29.110 "target": "spare", 01:31:29.110 "progress": { 01:31:29.110 "blocks": 21120, 01:31:29.110 "percent": 10 01:31:29.110 } 01:31:29.110 }, 01:31:29.110 "base_bdevs_list": [ 01:31:29.110 { 01:31:29.110 "name": "spare", 01:31:29.110 "uuid": "576958e7-f9f2-5d49-a4f2-ec48d39b5056", 01:31:29.110 "is_configured": true, 01:31:29.110 "data_offset": 0, 01:31:29.110 "data_size": 65536 01:31:29.110 }, 01:31:29.110 { 01:31:29.110 "name": "BaseBdev2", 01:31:29.110 "uuid": "2e8dfd4a-4996-54f1-bcf1-b1307e618118", 01:31:29.110 "is_configured": true, 01:31:29.110 "data_offset": 0, 01:31:29.110 "data_size": 65536 01:31:29.110 }, 01:31:29.110 { 01:31:29.110 "name": "BaseBdev3", 01:31:29.110 "uuid": "e7d0d07d-7e62-5dcf-afa5-a5c341d2ee40", 01:31:29.110 "is_configured": true, 01:31:29.110 "data_offset": 0, 01:31:29.110 "data_size": 65536 01:31:29.110 }, 01:31:29.110 { 01:31:29.110 "name": "BaseBdev4", 01:31:29.110 "uuid": "8cd0a785-69b8-5e23-8c46-9018c6ce1efa", 01:31:29.110 "is_configured": true, 01:31:29.110 "data_offset": 0, 01:31:29.110 "data_size": 65536 01:31:29.110 } 01:31:29.110 ] 01:31:29.110 }' 01:31:29.110 05:26:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:31:29.110 05:26:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 01:31:29.110 05:26:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:31:29.369 05:26:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 01:31:29.369 05:26:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 01:31:30.301 05:26:21 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 01:31:30.301 05:26:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 01:31:30.301 05:26:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:31:30.301 05:26:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 01:31:30.301 05:26:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 01:31:30.301 05:26:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:31:30.301 05:26:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:31:30.301 05:26:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:31:30.301 05:26:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:30.301 05:26:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 01:31:30.301 05:26:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:30.301 05:26:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:31:30.301 "name": "raid_bdev1", 01:31:30.301 "uuid": "9ac1dc23-ace5-453e-8430-c36157df2cbc", 01:31:30.301 "strip_size_kb": 64, 01:31:30.301 "state": "online", 01:31:30.301 "raid_level": "raid5f", 01:31:30.301 "superblock": false, 01:31:30.301 "num_base_bdevs": 4, 01:31:30.301 "num_base_bdevs_discovered": 4, 01:31:30.301 "num_base_bdevs_operational": 4, 01:31:30.301 "process": { 01:31:30.301 "type": "rebuild", 01:31:30.301 "target": "spare", 01:31:30.301 "progress": { 01:31:30.301 "blocks": 42240, 01:31:30.301 "percent": 21 01:31:30.301 } 01:31:30.301 }, 01:31:30.301 "base_bdevs_list": [ 01:31:30.301 { 01:31:30.301 "name": "spare", 01:31:30.301 "uuid": "576958e7-f9f2-5d49-a4f2-ec48d39b5056", 01:31:30.301 "is_configured": true, 
01:31:30.301 "data_offset": 0, 01:31:30.301 "data_size": 65536 01:31:30.301 }, 01:31:30.301 { 01:31:30.301 "name": "BaseBdev2", 01:31:30.301 "uuid": "2e8dfd4a-4996-54f1-bcf1-b1307e618118", 01:31:30.301 "is_configured": true, 01:31:30.301 "data_offset": 0, 01:31:30.301 "data_size": 65536 01:31:30.301 }, 01:31:30.301 { 01:31:30.301 "name": "BaseBdev3", 01:31:30.301 "uuid": "e7d0d07d-7e62-5dcf-afa5-a5c341d2ee40", 01:31:30.301 "is_configured": true, 01:31:30.301 "data_offset": 0, 01:31:30.301 "data_size": 65536 01:31:30.301 }, 01:31:30.301 { 01:31:30.301 "name": "BaseBdev4", 01:31:30.301 "uuid": "8cd0a785-69b8-5e23-8c46-9018c6ce1efa", 01:31:30.301 "is_configured": true, 01:31:30.301 "data_offset": 0, 01:31:30.301 "data_size": 65536 01:31:30.301 } 01:31:30.301 ] 01:31:30.301 }' 01:31:30.301 05:26:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:31:30.301 05:26:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 01:31:30.301 05:26:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:31:30.301 05:26:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 01:31:30.301 05:26:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 01:31:31.674 05:26:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 01:31:31.674 05:26:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 01:31:31.674 05:26:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:31:31.674 05:26:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 01:31:31.674 05:26:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 01:31:31.674 05:26:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
01:31:31.674 05:26:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:31:31.674 05:26:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:31:31.674 05:26:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:31.674 05:26:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 01:31:31.674 05:26:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:31.674 05:26:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:31:31.674 "name": "raid_bdev1", 01:31:31.674 "uuid": "9ac1dc23-ace5-453e-8430-c36157df2cbc", 01:31:31.674 "strip_size_kb": 64, 01:31:31.674 "state": "online", 01:31:31.674 "raid_level": "raid5f", 01:31:31.674 "superblock": false, 01:31:31.674 "num_base_bdevs": 4, 01:31:31.674 "num_base_bdevs_discovered": 4, 01:31:31.674 "num_base_bdevs_operational": 4, 01:31:31.674 "process": { 01:31:31.674 "type": "rebuild", 01:31:31.674 "target": "spare", 01:31:31.674 "progress": { 01:31:31.674 "blocks": 65280, 01:31:31.674 "percent": 33 01:31:31.674 } 01:31:31.674 }, 01:31:31.674 "base_bdevs_list": [ 01:31:31.674 { 01:31:31.674 "name": "spare", 01:31:31.674 "uuid": "576958e7-f9f2-5d49-a4f2-ec48d39b5056", 01:31:31.674 "is_configured": true, 01:31:31.674 "data_offset": 0, 01:31:31.674 "data_size": 65536 01:31:31.674 }, 01:31:31.674 { 01:31:31.674 "name": "BaseBdev2", 01:31:31.674 "uuid": "2e8dfd4a-4996-54f1-bcf1-b1307e618118", 01:31:31.674 "is_configured": true, 01:31:31.674 "data_offset": 0, 01:31:31.674 "data_size": 65536 01:31:31.674 }, 01:31:31.674 { 01:31:31.674 "name": "BaseBdev3", 01:31:31.674 "uuid": "e7d0d07d-7e62-5dcf-afa5-a5c341d2ee40", 01:31:31.674 "is_configured": true, 01:31:31.674 "data_offset": 0, 01:31:31.674 "data_size": 65536 01:31:31.674 }, 01:31:31.674 { 01:31:31.674 "name": "BaseBdev4", 01:31:31.674 "uuid": 
"8cd0a785-69b8-5e23-8c46-9018c6ce1efa", 01:31:31.674 "is_configured": true, 01:31:31.674 "data_offset": 0, 01:31:31.674 "data_size": 65536 01:31:31.674 } 01:31:31.674 ] 01:31:31.674 }' 01:31:31.674 05:26:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:31:31.674 05:26:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 01:31:31.674 05:26:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:31:31.674 05:26:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 01:31:31.674 05:26:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 01:31:32.607 05:26:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 01:31:32.607 05:26:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 01:31:32.607 05:26:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:31:32.607 05:26:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 01:31:32.607 05:26:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 01:31:32.607 05:26:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:31:32.607 05:26:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:31:32.607 05:26:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:31:32.607 05:26:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:32.607 05:26:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 01:31:32.607 05:26:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:32.607 05:26:24 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:31:32.607 "name": "raid_bdev1", 01:31:32.607 "uuid": "9ac1dc23-ace5-453e-8430-c36157df2cbc", 01:31:32.607 "strip_size_kb": 64, 01:31:32.607 "state": "online", 01:31:32.607 "raid_level": "raid5f", 01:31:32.607 "superblock": false, 01:31:32.607 "num_base_bdevs": 4, 01:31:32.607 "num_base_bdevs_discovered": 4, 01:31:32.607 "num_base_bdevs_operational": 4, 01:31:32.607 "process": { 01:31:32.607 "type": "rebuild", 01:31:32.607 "target": "spare", 01:31:32.607 "progress": { 01:31:32.607 "blocks": 88320, 01:31:32.607 "percent": 44 01:31:32.607 } 01:31:32.607 }, 01:31:32.607 "base_bdevs_list": [ 01:31:32.607 { 01:31:32.607 "name": "spare", 01:31:32.607 "uuid": "576958e7-f9f2-5d49-a4f2-ec48d39b5056", 01:31:32.607 "is_configured": true, 01:31:32.607 "data_offset": 0, 01:31:32.607 "data_size": 65536 01:31:32.607 }, 01:31:32.607 { 01:31:32.607 "name": "BaseBdev2", 01:31:32.607 "uuid": "2e8dfd4a-4996-54f1-bcf1-b1307e618118", 01:31:32.607 "is_configured": true, 01:31:32.607 "data_offset": 0, 01:31:32.607 "data_size": 65536 01:31:32.607 }, 01:31:32.607 { 01:31:32.607 "name": "BaseBdev3", 01:31:32.607 "uuid": "e7d0d07d-7e62-5dcf-afa5-a5c341d2ee40", 01:31:32.607 "is_configured": true, 01:31:32.607 "data_offset": 0, 01:31:32.607 "data_size": 65536 01:31:32.607 }, 01:31:32.607 { 01:31:32.607 "name": "BaseBdev4", 01:31:32.607 "uuid": "8cd0a785-69b8-5e23-8c46-9018c6ce1efa", 01:31:32.607 "is_configured": true, 01:31:32.607 "data_offset": 0, 01:31:32.607 "data_size": 65536 01:31:32.607 } 01:31:32.607 ] 01:31:32.607 }' 01:31:32.607 05:26:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:31:32.607 05:26:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 01:31:32.607 05:26:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:31:32.865 05:26:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ 
spare == \s\p\a\r\e ]] 01:31:32.865 05:26:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 01:31:33.828 05:26:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 01:31:33.828 05:26:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 01:31:33.828 05:26:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:31:33.828 05:26:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 01:31:33.828 05:26:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 01:31:33.828 05:26:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:31:33.828 05:26:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:31:33.828 05:26:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:31:33.828 05:26:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:33.828 05:26:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 01:31:33.828 05:26:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:33.828 05:26:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:31:33.828 "name": "raid_bdev1", 01:31:33.828 "uuid": "9ac1dc23-ace5-453e-8430-c36157df2cbc", 01:31:33.828 "strip_size_kb": 64, 01:31:33.828 "state": "online", 01:31:33.828 "raid_level": "raid5f", 01:31:33.828 "superblock": false, 01:31:33.828 "num_base_bdevs": 4, 01:31:33.828 "num_base_bdevs_discovered": 4, 01:31:33.828 "num_base_bdevs_operational": 4, 01:31:33.828 "process": { 01:31:33.828 "type": "rebuild", 01:31:33.828 "target": "spare", 01:31:33.828 "progress": { 01:31:33.828 "blocks": 109440, 01:31:33.828 "percent": 55 01:31:33.828 } 01:31:33.828 }, 01:31:33.828 
"base_bdevs_list": [ 01:31:33.828 { 01:31:33.828 "name": "spare", 01:31:33.828 "uuid": "576958e7-f9f2-5d49-a4f2-ec48d39b5056", 01:31:33.828 "is_configured": true, 01:31:33.828 "data_offset": 0, 01:31:33.828 "data_size": 65536 01:31:33.828 }, 01:31:33.828 { 01:31:33.828 "name": "BaseBdev2", 01:31:33.828 "uuid": "2e8dfd4a-4996-54f1-bcf1-b1307e618118", 01:31:33.828 "is_configured": true, 01:31:33.828 "data_offset": 0, 01:31:33.828 "data_size": 65536 01:31:33.828 }, 01:31:33.828 { 01:31:33.828 "name": "BaseBdev3", 01:31:33.828 "uuid": "e7d0d07d-7e62-5dcf-afa5-a5c341d2ee40", 01:31:33.828 "is_configured": true, 01:31:33.828 "data_offset": 0, 01:31:33.828 "data_size": 65536 01:31:33.828 }, 01:31:33.828 { 01:31:33.828 "name": "BaseBdev4", 01:31:33.828 "uuid": "8cd0a785-69b8-5e23-8c46-9018c6ce1efa", 01:31:33.828 "is_configured": true, 01:31:33.828 "data_offset": 0, 01:31:33.828 "data_size": 65536 01:31:33.828 } 01:31:33.828 ] 01:31:33.828 }' 01:31:33.828 05:26:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:31:33.828 05:26:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 01:31:33.828 05:26:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:31:33.828 05:26:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 01:31:33.828 05:26:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 01:31:35.202 05:26:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 01:31:35.202 05:26:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 01:31:35.202 05:26:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:31:35.202 05:26:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 01:31:35.202 05:26:26 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 01:31:35.202 05:26:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:31:35.202 05:26:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:31:35.202 05:26:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:31:35.202 05:26:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:35.202 05:26:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 01:31:35.202 05:26:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:35.202 05:26:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:31:35.202 "name": "raid_bdev1", 01:31:35.202 "uuid": "9ac1dc23-ace5-453e-8430-c36157df2cbc", 01:31:35.202 "strip_size_kb": 64, 01:31:35.202 "state": "online", 01:31:35.202 "raid_level": "raid5f", 01:31:35.202 "superblock": false, 01:31:35.202 "num_base_bdevs": 4, 01:31:35.202 "num_base_bdevs_discovered": 4, 01:31:35.202 "num_base_bdevs_operational": 4, 01:31:35.202 "process": { 01:31:35.202 "type": "rebuild", 01:31:35.202 "target": "spare", 01:31:35.202 "progress": { 01:31:35.202 "blocks": 132480, 01:31:35.202 "percent": 67 01:31:35.202 } 01:31:35.202 }, 01:31:35.202 "base_bdevs_list": [ 01:31:35.202 { 01:31:35.202 "name": "spare", 01:31:35.202 "uuid": "576958e7-f9f2-5d49-a4f2-ec48d39b5056", 01:31:35.202 "is_configured": true, 01:31:35.202 "data_offset": 0, 01:31:35.202 "data_size": 65536 01:31:35.202 }, 01:31:35.202 { 01:31:35.202 "name": "BaseBdev2", 01:31:35.202 "uuid": "2e8dfd4a-4996-54f1-bcf1-b1307e618118", 01:31:35.202 "is_configured": true, 01:31:35.202 "data_offset": 0, 01:31:35.202 "data_size": 65536 01:31:35.202 }, 01:31:35.202 { 01:31:35.202 "name": "BaseBdev3", 01:31:35.202 "uuid": "e7d0d07d-7e62-5dcf-afa5-a5c341d2ee40", 01:31:35.202 
"is_configured": true, 01:31:35.202 "data_offset": 0, 01:31:35.202 "data_size": 65536 01:31:35.202 }, 01:31:35.202 { 01:31:35.202 "name": "BaseBdev4", 01:31:35.202 "uuid": "8cd0a785-69b8-5e23-8c46-9018c6ce1efa", 01:31:35.202 "is_configured": true, 01:31:35.202 "data_offset": 0, 01:31:35.202 "data_size": 65536 01:31:35.202 } 01:31:35.202 ] 01:31:35.202 }' 01:31:35.202 05:26:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:31:35.202 05:26:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 01:31:35.202 05:26:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:31:35.202 05:26:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 01:31:35.202 05:26:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 01:31:36.138 05:26:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 01:31:36.138 05:26:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 01:31:36.138 05:26:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:31:36.138 05:26:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 01:31:36.138 05:26:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 01:31:36.138 05:26:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:31:36.138 05:26:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:31:36.138 05:26:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:31:36.138 05:26:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:36.138 05:26:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # 
set +x 01:31:36.138 05:26:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:36.138 05:26:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:31:36.138 "name": "raid_bdev1", 01:31:36.138 "uuid": "9ac1dc23-ace5-453e-8430-c36157df2cbc", 01:31:36.138 "strip_size_kb": 64, 01:31:36.138 "state": "online", 01:31:36.138 "raid_level": "raid5f", 01:31:36.138 "superblock": false, 01:31:36.138 "num_base_bdevs": 4, 01:31:36.138 "num_base_bdevs_discovered": 4, 01:31:36.138 "num_base_bdevs_operational": 4, 01:31:36.138 "process": { 01:31:36.138 "type": "rebuild", 01:31:36.138 "target": "spare", 01:31:36.138 "progress": { 01:31:36.138 "blocks": 153600, 01:31:36.138 "percent": 78 01:31:36.138 } 01:31:36.138 }, 01:31:36.138 "base_bdevs_list": [ 01:31:36.138 { 01:31:36.138 "name": "spare", 01:31:36.138 "uuid": "576958e7-f9f2-5d49-a4f2-ec48d39b5056", 01:31:36.138 "is_configured": true, 01:31:36.138 "data_offset": 0, 01:31:36.138 "data_size": 65536 01:31:36.138 }, 01:31:36.138 { 01:31:36.138 "name": "BaseBdev2", 01:31:36.138 "uuid": "2e8dfd4a-4996-54f1-bcf1-b1307e618118", 01:31:36.138 "is_configured": true, 01:31:36.138 "data_offset": 0, 01:31:36.138 "data_size": 65536 01:31:36.138 }, 01:31:36.138 { 01:31:36.138 "name": "BaseBdev3", 01:31:36.138 "uuid": "e7d0d07d-7e62-5dcf-afa5-a5c341d2ee40", 01:31:36.138 "is_configured": true, 01:31:36.138 "data_offset": 0, 01:31:36.138 "data_size": 65536 01:31:36.138 }, 01:31:36.138 { 01:31:36.138 "name": "BaseBdev4", 01:31:36.138 "uuid": "8cd0a785-69b8-5e23-8c46-9018c6ce1efa", 01:31:36.138 "is_configured": true, 01:31:36.138 "data_offset": 0, 01:31:36.138 "data_size": 65536 01:31:36.138 } 01:31:36.138 ] 01:31:36.138 }' 01:31:36.138 05:26:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:31:36.138 05:26:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 01:31:36.138 05:26:27 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:31:36.138 05:26:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 01:31:36.138 05:26:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 01:31:37.513 05:26:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 01:31:37.513 05:26:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 01:31:37.513 05:26:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:31:37.513 05:26:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 01:31:37.513 05:26:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 01:31:37.513 05:26:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:31:37.513 05:26:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:31:37.513 05:26:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:31:37.513 05:26:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:37.513 05:26:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 01:31:37.513 05:26:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:37.513 05:26:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:31:37.513 "name": "raid_bdev1", 01:31:37.513 "uuid": "9ac1dc23-ace5-453e-8430-c36157df2cbc", 01:31:37.513 "strip_size_kb": 64, 01:31:37.513 "state": "online", 01:31:37.513 "raid_level": "raid5f", 01:31:37.513 "superblock": false, 01:31:37.513 "num_base_bdevs": 4, 01:31:37.513 "num_base_bdevs_discovered": 4, 01:31:37.513 "num_base_bdevs_operational": 4, 01:31:37.513 "process": { 01:31:37.513 
"type": "rebuild", 01:31:37.513 "target": "spare", 01:31:37.513 "progress": { 01:31:37.513 "blocks": 176640, 01:31:37.513 "percent": 89 01:31:37.513 } 01:31:37.513 }, 01:31:37.513 "base_bdevs_list": [ 01:31:37.513 { 01:31:37.513 "name": "spare", 01:31:37.513 "uuid": "576958e7-f9f2-5d49-a4f2-ec48d39b5056", 01:31:37.514 "is_configured": true, 01:31:37.514 "data_offset": 0, 01:31:37.514 "data_size": 65536 01:31:37.514 }, 01:31:37.514 { 01:31:37.514 "name": "BaseBdev2", 01:31:37.514 "uuid": "2e8dfd4a-4996-54f1-bcf1-b1307e618118", 01:31:37.514 "is_configured": true, 01:31:37.514 "data_offset": 0, 01:31:37.514 "data_size": 65536 01:31:37.514 }, 01:31:37.514 { 01:31:37.514 "name": "BaseBdev3", 01:31:37.514 "uuid": "e7d0d07d-7e62-5dcf-afa5-a5c341d2ee40", 01:31:37.514 "is_configured": true, 01:31:37.514 "data_offset": 0, 01:31:37.514 "data_size": 65536 01:31:37.514 }, 01:31:37.514 { 01:31:37.514 "name": "BaseBdev4", 01:31:37.514 "uuid": "8cd0a785-69b8-5e23-8c46-9018c6ce1efa", 01:31:37.514 "is_configured": true, 01:31:37.514 "data_offset": 0, 01:31:37.514 "data_size": 65536 01:31:37.514 } 01:31:37.514 ] 01:31:37.514 }' 01:31:37.514 05:26:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:31:37.514 05:26:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 01:31:37.514 05:26:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:31:37.514 05:26:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 01:31:37.514 05:26:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 01:31:38.446 [2024-12-09 05:26:29.830172] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 01:31:38.446 [2024-12-09 05:26:29.830514] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 01:31:38.446 [2024-12-09 05:26:29.830720] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:31:38.446 05:26:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 01:31:38.446 05:26:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 01:31:38.446 05:26:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:31:38.446 05:26:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 01:31:38.446 05:26:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 01:31:38.446 05:26:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:31:38.446 05:26:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:31:38.446 05:26:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:31:38.446 05:26:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:38.446 05:26:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 01:31:38.446 05:26:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:38.446 05:26:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:31:38.446 "name": "raid_bdev1", 01:31:38.446 "uuid": "9ac1dc23-ace5-453e-8430-c36157df2cbc", 01:31:38.446 "strip_size_kb": 64, 01:31:38.446 "state": "online", 01:31:38.446 "raid_level": "raid5f", 01:31:38.446 "superblock": false, 01:31:38.446 "num_base_bdevs": 4, 01:31:38.446 "num_base_bdevs_discovered": 4, 01:31:38.446 "num_base_bdevs_operational": 4, 01:31:38.446 "base_bdevs_list": [ 01:31:38.446 { 01:31:38.446 "name": "spare", 01:31:38.446 "uuid": "576958e7-f9f2-5d49-a4f2-ec48d39b5056", 01:31:38.446 "is_configured": true, 01:31:38.446 "data_offset": 0, 01:31:38.446 "data_size": 65536 01:31:38.446 }, 01:31:38.446 { 
01:31:38.446 "name": "BaseBdev2", 01:31:38.446 "uuid": "2e8dfd4a-4996-54f1-bcf1-b1307e618118", 01:31:38.446 "is_configured": true, 01:31:38.446 "data_offset": 0, 01:31:38.446 "data_size": 65536 01:31:38.446 }, 01:31:38.446 { 01:31:38.446 "name": "BaseBdev3", 01:31:38.446 "uuid": "e7d0d07d-7e62-5dcf-afa5-a5c341d2ee40", 01:31:38.446 "is_configured": true, 01:31:38.446 "data_offset": 0, 01:31:38.446 "data_size": 65536 01:31:38.446 }, 01:31:38.446 { 01:31:38.446 "name": "BaseBdev4", 01:31:38.446 "uuid": "8cd0a785-69b8-5e23-8c46-9018c6ce1efa", 01:31:38.446 "is_configured": true, 01:31:38.446 "data_offset": 0, 01:31:38.446 "data_size": 65536 01:31:38.446 } 01:31:38.446 ] 01:31:38.446 }' 01:31:38.446 05:26:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:31:38.446 05:26:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 01:31:38.446 05:26:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:31:38.446 05:26:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 01:31:38.446 05:26:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 01:31:38.446 05:26:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 01:31:38.446 05:26:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:31:38.446 05:26:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 01:31:38.446 05:26:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 01:31:38.446 05:26:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:31:38.446 05:26:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:31:38.446 05:26:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name 
== "raid_bdev1")' 01:31:38.446 05:26:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:38.446 05:26:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 01:31:38.704 05:26:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:38.704 05:26:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:31:38.704 "name": "raid_bdev1", 01:31:38.704 "uuid": "9ac1dc23-ace5-453e-8430-c36157df2cbc", 01:31:38.704 "strip_size_kb": 64, 01:31:38.704 "state": "online", 01:31:38.704 "raid_level": "raid5f", 01:31:38.704 "superblock": false, 01:31:38.704 "num_base_bdevs": 4, 01:31:38.704 "num_base_bdevs_discovered": 4, 01:31:38.704 "num_base_bdevs_operational": 4, 01:31:38.704 "base_bdevs_list": [ 01:31:38.704 { 01:31:38.704 "name": "spare", 01:31:38.704 "uuid": "576958e7-f9f2-5d49-a4f2-ec48d39b5056", 01:31:38.704 "is_configured": true, 01:31:38.704 "data_offset": 0, 01:31:38.704 "data_size": 65536 01:31:38.704 }, 01:31:38.704 { 01:31:38.704 "name": "BaseBdev2", 01:31:38.704 "uuid": "2e8dfd4a-4996-54f1-bcf1-b1307e618118", 01:31:38.704 "is_configured": true, 01:31:38.704 "data_offset": 0, 01:31:38.704 "data_size": 65536 01:31:38.704 }, 01:31:38.704 { 01:31:38.704 "name": "BaseBdev3", 01:31:38.704 "uuid": "e7d0d07d-7e62-5dcf-afa5-a5c341d2ee40", 01:31:38.704 "is_configured": true, 01:31:38.704 "data_offset": 0, 01:31:38.704 "data_size": 65536 01:31:38.704 }, 01:31:38.704 { 01:31:38.704 "name": "BaseBdev4", 01:31:38.704 "uuid": "8cd0a785-69b8-5e23-8c46-9018c6ce1efa", 01:31:38.704 "is_configured": true, 01:31:38.704 "data_offset": 0, 01:31:38.704 "data_size": 65536 01:31:38.704 } 01:31:38.704 ] 01:31:38.704 }' 01:31:38.704 05:26:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:31:38.704 05:26:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 01:31:38.704 05:26:30 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:31:38.704 05:26:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 01:31:38.704 05:26:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 01:31:38.704 05:26:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:31:38.704 05:26:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:31:38.704 05:26:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 01:31:38.704 05:26:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:31:38.704 05:26:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 01:31:38.704 05:26:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:31:38.704 05:26:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:31:38.704 05:26:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:31:38.704 05:26:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 01:31:38.704 05:26:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:31:38.704 05:26:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:38.704 05:26:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 01:31:38.704 05:26:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:31:38.704 05:26:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:38.704 05:26:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:31:38.704 "name": "raid_bdev1", 01:31:38.704 "uuid": 
"9ac1dc23-ace5-453e-8430-c36157df2cbc", 01:31:38.704 "strip_size_kb": 64, 01:31:38.704 "state": "online", 01:31:38.704 "raid_level": "raid5f", 01:31:38.704 "superblock": false, 01:31:38.704 "num_base_bdevs": 4, 01:31:38.704 "num_base_bdevs_discovered": 4, 01:31:38.704 "num_base_bdevs_operational": 4, 01:31:38.704 "base_bdevs_list": [ 01:31:38.704 { 01:31:38.704 "name": "spare", 01:31:38.704 "uuid": "576958e7-f9f2-5d49-a4f2-ec48d39b5056", 01:31:38.704 "is_configured": true, 01:31:38.704 "data_offset": 0, 01:31:38.704 "data_size": 65536 01:31:38.704 }, 01:31:38.704 { 01:31:38.704 "name": "BaseBdev2", 01:31:38.704 "uuid": "2e8dfd4a-4996-54f1-bcf1-b1307e618118", 01:31:38.704 "is_configured": true, 01:31:38.704 "data_offset": 0, 01:31:38.704 "data_size": 65536 01:31:38.704 }, 01:31:38.704 { 01:31:38.704 "name": "BaseBdev3", 01:31:38.704 "uuid": "e7d0d07d-7e62-5dcf-afa5-a5c341d2ee40", 01:31:38.704 "is_configured": true, 01:31:38.704 "data_offset": 0, 01:31:38.704 "data_size": 65536 01:31:38.704 }, 01:31:38.704 { 01:31:38.704 "name": "BaseBdev4", 01:31:38.704 "uuid": "8cd0a785-69b8-5e23-8c46-9018c6ce1efa", 01:31:38.704 "is_configured": true, 01:31:38.704 "data_offset": 0, 01:31:38.704 "data_size": 65536 01:31:38.704 } 01:31:38.704 ] 01:31:38.704 }' 01:31:38.704 05:26:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:31:38.704 05:26:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 01:31:39.269 05:26:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 01:31:39.269 05:26:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:39.269 05:26:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 01:31:39.269 [2024-12-09 05:26:30.754285] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 01:31:39.269 [2024-12-09 05:26:30.754525] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state 
changing from online to offline 01:31:39.269 [2024-12-09 05:26:30.754807] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 01:31:39.269 [2024-12-09 05:26:30.754948] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 01:31:39.269 [2024-12-09 05:26:30.754968] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 01:31:39.269 05:26:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:39.269 05:26:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 01:31:39.269 05:26:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 01:31:39.269 05:26:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:39.269 05:26:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 01:31:39.269 05:26:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:39.269 05:26:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 01:31:39.269 05:26:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 01:31:39.269 05:26:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 01:31:39.269 05:26:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 01:31:39.269 05:26:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 01:31:39.269 05:26:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 01:31:39.269 05:26:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 01:31:39.269 05:26:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 01:31:39.269 05:26:30 
bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 01:31:39.269 05:26:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 01:31:39.269 05:26:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 01:31:39.269 05:26:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 01:31:39.269 05:26:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 01:31:39.537 /dev/nbd0 01:31:39.537 05:26:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 01:31:39.537 05:26:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 01:31:39.537 05:26:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 01:31:39.537 05:26:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 01:31:39.537 05:26:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 01:31:39.537 05:26:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 01:31:39.537 05:26:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 01:31:39.801 05:26:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 01:31:39.801 05:26:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 01:31:39.801 05:26:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 01:31:39.801 05:26:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 01:31:39.801 1+0 records in 01:31:39.801 1+0 records out 01:31:39.801 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000565143 s, 7.2 MB/s 01:31:39.801 05:26:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 
-- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:31:39.801 05:26:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 01:31:39.801 05:26:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:31:39.801 05:26:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 01:31:39.801 05:26:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 01:31:39.801 05:26:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 01:31:39.801 05:26:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 01:31:39.801 05:26:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 01:31:40.059 /dev/nbd1 01:31:40.059 05:26:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 01:31:40.059 05:26:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 01:31:40.059 05:26:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 01:31:40.059 05:26:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 01:31:40.059 05:26:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 01:31:40.059 05:26:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 01:31:40.059 05:26:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 01:31:40.059 05:26:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 01:31:40.060 05:26:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 01:31:40.060 05:26:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 01:31:40.060 05:26:31 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 01:31:40.060 1+0 records in 01:31:40.060 1+0 records out 01:31:40.060 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000414377 s, 9.9 MB/s 01:31:40.060 05:26:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:31:40.060 05:26:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 01:31:40.060 05:26:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:31:40.060 05:26:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 01:31:40.060 05:26:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 01:31:40.060 05:26:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 01:31:40.060 05:26:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 01:31:40.060 05:26:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 01:31:40.317 05:26:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 01:31:40.317 05:26:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 01:31:40.317 05:26:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 01:31:40.317 05:26:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 01:31:40.317 05:26:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 01:31:40.317 05:26:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 01:31:40.317 05:26:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 
01:31:40.575 05:26:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 01:31:40.575 05:26:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 01:31:40.575 05:26:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 01:31:40.575 05:26:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:31:40.575 05:26:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:31:40.575 05:26:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 01:31:40.575 05:26:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 01:31:40.575 05:26:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 01:31:40.575 05:26:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 01:31:40.575 05:26:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 01:31:40.833 05:26:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 01:31:40.833 05:26:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 01:31:40.833 05:26:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 01:31:40.833 05:26:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:31:40.833 05:26:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:31:40.833 05:26:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 01:31:40.833 05:26:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 01:31:40.833 05:26:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 01:31:40.833 05:26:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 01:31:40.833 05:26:32 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 84955 01:31:40.833 05:26:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 84955 ']' 01:31:40.833 05:26:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 84955 01:31:40.833 05:26:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 01:31:40.833 05:26:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:31:40.833 05:26:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84955 01:31:40.833 killing process with pid 84955 01:31:40.833 Received shutdown signal, test time was about 60.000000 seconds 01:31:40.833 01:31:40.833 Latency(us) 01:31:40.833 [2024-12-09T05:26:32.450Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:31:40.833 [2024-12-09T05:26:32.450Z] =================================================================================================================== 01:31:40.833 [2024-12-09T05:26:32.450Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 01:31:40.833 05:26:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:31:40.833 05:26:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:31:40.833 05:26:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84955' 01:31:40.833 05:26:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 84955 01:31:40.833 [2024-12-09 05:26:32.334722] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 01:31:40.833 05:26:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 84955 01:31:41.400 [2024-12-09 05:26:32.761348] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 01:31:42.336 ************************************ 01:31:42.336 END TEST 
raid5f_rebuild_test 01:31:42.336 ************************************ 01:31:42.336 05:26:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 01:31:42.336 01:31:42.336 real 0m20.253s 01:31:42.336 user 0m25.195s 01:31:42.336 sys 0m2.299s 01:31:42.336 05:26:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 01:31:42.336 05:26:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 01:31:42.336 05:26:33 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false true 01:31:42.336 05:26:33 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 01:31:42.336 05:26:33 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 01:31:42.336 05:26:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 01:31:42.336 ************************************ 01:31:42.336 START TEST raid5f_rebuild_test_sb 01:31:42.336 ************************************ 01:31:42.336 05:26:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 true false true 01:31:42.336 05:26:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 01:31:42.336 05:26:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 01:31:42.336 05:26:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 01:31:42.336 05:26:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 01:31:42.336 05:26:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 01:31:42.336 05:26:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 01:31:42.336 05:26:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 01:31:42.336 05:26:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 01:31:42.336 05:26:33 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 01:31:42.336 05:26:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 01:31:42.336 05:26:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 01:31:42.336 05:26:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 01:31:42.336 05:26:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 01:31:42.336 05:26:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 01:31:42.336 05:26:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 01:31:42.336 05:26:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 01:31:42.336 05:26:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 01:31:42.336 05:26:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 01:31:42.336 05:26:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 01:31:42.336 05:26:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 01:31:42.336 05:26:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 01:31:42.336 05:26:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 01:31:42.336 05:26:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 01:31:42.336 05:26:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 01:31:42.337 05:26:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 01:31:42.337 05:26:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 01:31:42.337 05:26:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 
01:31:42.337 05:26:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 01:31:42.337 05:26:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 01:31:42.337 05:26:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 01:31:42.337 05:26:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 01:31:42.337 05:26:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 01:31:42.337 05:26:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=85464 01:31:42.337 05:26:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 01:31:42.595 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:31:42.595 05:26:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 85464 01:31:42.595 05:26:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 85464 ']' 01:31:42.595 05:26:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:31:42.595 05:26:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 01:31:42.595 05:26:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:31:42.595 05:26:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 01:31:42.595 05:26:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:31:42.595 [2024-12-09 05:26:34.065718] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:31:42.595 I/O size of 3145728 is greater than zero copy threshold (65536). 
01:31:42.595 Zero copy mechanism will not be used. 01:31:42.595 [2024-12-09 05:26:34.066224] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85464 ] 01:31:42.854 [2024-12-09 05:26:34.255505] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:31:42.854 [2024-12-09 05:26:34.383714] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:31:43.113 [2024-12-09 05:26:34.589231] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 01:31:43.113 [2024-12-09 05:26:34.589276] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 01:31:43.691 05:26:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:31:43.691 05:26:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 01:31:43.691 05:26:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 01:31:43.691 05:26:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 01:31:43.691 05:26:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:43.691 05:26:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:31:43.691 BaseBdev1_malloc 01:31:43.691 05:26:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:43.691 05:26:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 01:31:43.691 05:26:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:43.691 05:26:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:31:43.691 [2024-12-09 
05:26:35.100034] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 01:31:43.691 [2024-12-09 05:26:35.100316] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:31:43.691 [2024-12-09 05:26:35.100370] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 01:31:43.691 [2024-12-09 05:26:35.100394] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:31:43.691 [2024-12-09 05:26:35.103442] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:31:43.691 [2024-12-09 05:26:35.103503] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 01:31:43.691 BaseBdev1 01:31:43.691 05:26:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:43.691 05:26:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 01:31:43.691 05:26:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 01:31:43.691 05:26:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:43.691 05:26:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:31:43.691 BaseBdev2_malloc 01:31:43.691 05:26:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:43.691 05:26:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 01:31:43.691 05:26:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:43.691 05:26:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:31:43.691 [2024-12-09 05:26:35.152291] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 01:31:43.691 [2024-12-09 05:26:35.152605] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:31:43.691 [2024-12-09 05:26:35.152779] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 01:31:43.691 [2024-12-09 05:26:35.152810] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:31:43.691 [2024-12-09 05:26:35.155780] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:31:43.691 [2024-12-09 05:26:35.155841] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 01:31:43.691 BaseBdev2 01:31:43.691 05:26:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:43.691 05:26:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 01:31:43.691 05:26:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 01:31:43.691 05:26:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:43.692 05:26:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:31:43.692 BaseBdev3_malloc 01:31:43.692 05:26:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:43.692 05:26:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 01:31:43.692 05:26:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:43.692 05:26:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:31:43.692 [2024-12-09 05:26:35.216601] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 01:31:43.692 [2024-12-09 05:26:35.216851] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:31:43.692 [2024-12-09 05:26:35.216930] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x616000008a80 01:31:43.692 [2024-12-09 05:26:35.216958] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:31:43.692 [2024-12-09 05:26:35.220036] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:31:43.692 [2024-12-09 05:26:35.220130] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 01:31:43.692 BaseBdev3 01:31:43.692 05:26:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:43.692 05:26:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 01:31:43.692 05:26:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 01:31:43.692 05:26:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:43.692 05:26:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:31:43.692 BaseBdev4_malloc 01:31:43.692 05:26:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:43.692 05:26:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 01:31:43.692 05:26:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:43.692 05:26:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:31:43.692 [2024-12-09 05:26:35.269675] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 01:31:43.692 [2024-12-09 05:26:35.269945] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:31:43.692 [2024-12-09 05:26:35.270021] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 01:31:43.692 [2024-12-09 05:26:35.270317] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:31:43.692 [2024-12-09 05:26:35.273343] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:31:43.692 [2024-12-09 05:26:35.273553] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 01:31:43.692 BaseBdev4 01:31:43.692 05:26:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:43.692 05:26:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 01:31:43.692 05:26:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:43.692 05:26:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:31:43.956 spare_malloc 01:31:43.956 05:26:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:43.956 05:26:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 01:31:43.956 05:26:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:43.956 05:26:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:31:43.956 spare_delay 01:31:43.956 05:26:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:43.956 05:26:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 01:31:43.956 05:26:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:43.956 05:26:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:31:43.956 [2024-12-09 05:26:35.332067] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 01:31:43.956 [2024-12-09 05:26:35.332280] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:31:43.956 [2024-12-09 05:26:35.332347] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0x61600000a880 01:31:43.956 [2024-12-09 05:26:35.332569] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:31:43.956 [2024-12-09 05:26:35.335621] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:31:43.956 [2024-12-09 05:26:35.335872] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 01:31:43.956 spare 01:31:43.956 05:26:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:43.956 05:26:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 01:31:43.956 05:26:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:43.956 05:26:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:31:43.956 [2024-12-09 05:26:35.340226] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 01:31:43.956 [2024-12-09 05:26:35.343016] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 01:31:43.956 [2024-12-09 05:26:35.343260] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 01:31:43.956 [2024-12-09 05:26:35.343421] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 01:31:43.956 [2024-12-09 05:26:35.343714] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 01:31:43.956 [2024-12-09 05:26:35.343737] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 01:31:43.956 [2024-12-09 05:26:35.344091] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 01:31:43.956 [2024-12-09 05:26:35.351167] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 01:31:43.956 [2024-12-09 05:26:35.351197] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 01:31:43.956 [2024-12-09 05:26:35.351499] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:31:43.956 05:26:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:43.956 05:26:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 01:31:43.956 05:26:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:31:43.956 05:26:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:31:43.956 05:26:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 01:31:43.956 05:26:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:31:43.956 05:26:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 01:31:43.956 05:26:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:31:43.956 05:26:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:31:43.956 05:26:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:31:43.956 05:26:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 01:31:43.956 05:26:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:31:43.956 05:26:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:43.956 05:26:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:31:43.956 05:26:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:31:43.956 05:26:35 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:43.956 05:26:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:31:43.956 "name": "raid_bdev1", 01:31:43.956 "uuid": "49ae7a20-4977-4898-8491-6c0dab3b5bf7", 01:31:43.956 "strip_size_kb": 64, 01:31:43.956 "state": "online", 01:31:43.956 "raid_level": "raid5f", 01:31:43.956 "superblock": true, 01:31:43.956 "num_base_bdevs": 4, 01:31:43.956 "num_base_bdevs_discovered": 4, 01:31:43.956 "num_base_bdevs_operational": 4, 01:31:43.956 "base_bdevs_list": [ 01:31:43.956 { 01:31:43.956 "name": "BaseBdev1", 01:31:43.956 "uuid": "b2d0e468-768b-595b-838b-689d3800b94a", 01:31:43.956 "is_configured": true, 01:31:43.956 "data_offset": 2048, 01:31:43.956 "data_size": 63488 01:31:43.956 }, 01:31:43.956 { 01:31:43.956 "name": "BaseBdev2", 01:31:43.956 "uuid": "6beaad86-04b8-517c-9832-da0345f859c5", 01:31:43.956 "is_configured": true, 01:31:43.956 "data_offset": 2048, 01:31:43.956 "data_size": 63488 01:31:43.956 }, 01:31:43.956 { 01:31:43.956 "name": "BaseBdev3", 01:31:43.956 "uuid": "42499631-a9bc-56fc-80b5-b91d0f9da843", 01:31:43.956 "is_configured": true, 01:31:43.956 "data_offset": 2048, 01:31:43.956 "data_size": 63488 01:31:43.956 }, 01:31:43.956 { 01:31:43.956 "name": "BaseBdev4", 01:31:43.956 "uuid": "06225e10-ba4a-5416-ad31-40a85621f269", 01:31:43.956 "is_configured": true, 01:31:43.956 "data_offset": 2048, 01:31:43.956 "data_size": 63488 01:31:43.956 } 01:31:43.956 ] 01:31:43.956 }' 01:31:43.956 05:26:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:31:43.956 05:26:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:31:44.521 05:26:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 01:31:44.521 05:26:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:44.521 05:26:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 01:31:44.521 05:26:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 01:31:44.521 [2024-12-09 05:26:35.875825] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 01:31:44.521 05:26:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:44.521 05:26:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=190464 01:31:44.521 05:26:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 01:31:44.521 05:26:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 01:31:44.521 05:26:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:44.521 05:26:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:31:44.521 05:26:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:44.521 05:26:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 01:31:44.521 05:26:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 01:31:44.521 05:26:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 01:31:44.521 05:26:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 01:31:44.521 05:26:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 01:31:44.521 05:26:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 01:31:44.521 05:26:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 01:31:44.521 05:26:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 01:31:44.521 05:26:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # 
nbd_list=('/dev/nbd0') 01:31:44.521 05:26:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 01:31:44.521 05:26:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 01:31:44.521 05:26:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 01:31:44.521 05:26:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 01:31:44.521 05:26:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 01:31:44.779 [2024-12-09 05:26:36.259694] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 01:31:44.779 /dev/nbd0 01:31:44.779 05:26:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 01:31:44.779 05:26:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 01:31:44.779 05:26:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 01:31:44.779 05:26:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 01:31:44.779 05:26:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 01:31:44.779 05:26:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 01:31:44.779 05:26:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 01:31:44.779 05:26:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 01:31:44.779 05:26:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 01:31:44.779 05:26:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 01:31:44.779 05:26:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 
iflag=direct 01:31:44.779 1+0 records in 01:31:44.779 1+0 records out 01:31:44.779 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000666026 s, 6.1 MB/s 01:31:44.779 05:26:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:31:44.779 05:26:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 01:31:44.779 05:26:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:31:44.779 05:26:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 01:31:44.779 05:26:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 01:31:44.779 05:26:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 01:31:44.779 05:26:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 01:31:44.779 05:26:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 01:31:44.779 05:26:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 01:31:44.779 05:26:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 192 01:31:44.779 05:26:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 01:31:45.346 496+0 records in 01:31:45.346 496+0 records out 01:31:45.346 97517568 bytes (98 MB, 93 MiB) copied, 0.568449 s, 172 MB/s 01:31:45.346 05:26:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 01:31:45.346 05:26:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 01:31:45.346 05:26:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 01:31:45.346 05:26:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local 
nbd_list 01:31:45.346 05:26:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 01:31:45.346 05:26:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 01:31:45.346 05:26:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 01:31:45.605 05:26:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 01:31:45.605 [2024-12-09 05:26:37.211356] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:31:45.605 05:26:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 01:31:45.605 05:26:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 01:31:45.605 05:26:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:31:45.605 05:26:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:31:45.605 05:26:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 01:31:45.605 05:26:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 01:31:45.605 05:26:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 01:31:45.605 05:26:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 01:31:45.605 05:26:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:45.605 05:26:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:31:45.864 [2024-12-09 05:26:37.223367] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 01:31:45.864 05:26:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:45.864 05:26:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 
3 01:31:45.864 05:26:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:31:45.864 05:26:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:31:45.864 05:26:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 01:31:45.864 05:26:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:31:45.864 05:26:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 01:31:45.864 05:26:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:31:45.864 05:26:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:31:45.864 05:26:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:31:45.864 05:26:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 01:31:45.864 05:26:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:31:45.864 05:26:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:45.864 05:26:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:31:45.864 05:26:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:31:45.864 05:26:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:45.864 05:26:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:31:45.864 "name": "raid_bdev1", 01:31:45.864 "uuid": "49ae7a20-4977-4898-8491-6c0dab3b5bf7", 01:31:45.864 "strip_size_kb": 64, 01:31:45.864 "state": "online", 01:31:45.864 "raid_level": "raid5f", 01:31:45.864 "superblock": true, 01:31:45.864 "num_base_bdevs": 4, 01:31:45.864 "num_base_bdevs_discovered": 3, 01:31:45.864 
"num_base_bdevs_operational": 3, 01:31:45.864 "base_bdevs_list": [ 01:31:45.864 { 01:31:45.864 "name": null, 01:31:45.864 "uuid": "00000000-0000-0000-0000-000000000000", 01:31:45.864 "is_configured": false, 01:31:45.864 "data_offset": 0, 01:31:45.864 "data_size": 63488 01:31:45.864 }, 01:31:45.864 { 01:31:45.864 "name": "BaseBdev2", 01:31:45.864 "uuid": "6beaad86-04b8-517c-9832-da0345f859c5", 01:31:45.864 "is_configured": true, 01:31:45.864 "data_offset": 2048, 01:31:45.864 "data_size": 63488 01:31:45.864 }, 01:31:45.864 { 01:31:45.864 "name": "BaseBdev3", 01:31:45.864 "uuid": "42499631-a9bc-56fc-80b5-b91d0f9da843", 01:31:45.864 "is_configured": true, 01:31:45.864 "data_offset": 2048, 01:31:45.864 "data_size": 63488 01:31:45.864 }, 01:31:45.864 { 01:31:45.864 "name": "BaseBdev4", 01:31:45.864 "uuid": "06225e10-ba4a-5416-ad31-40a85621f269", 01:31:45.864 "is_configured": true, 01:31:45.864 "data_offset": 2048, 01:31:45.864 "data_size": 63488 01:31:45.864 } 01:31:45.864 ] 01:31:45.864 }' 01:31:45.864 05:26:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:31:45.864 05:26:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:31:46.123 05:26:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 01:31:46.123 05:26:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:46.123 05:26:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:31:46.123 [2024-12-09 05:26:37.735669] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 01:31:46.382 [2024-12-09 05:26:37.750229] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002aa50 01:31:46.382 05:26:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:46.382 05:26:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 01:31:46.382 
[2024-12-09 05:26:37.758949] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 01:31:47.319 05:26:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 01:31:47.319 05:26:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:31:47.319 05:26:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 01:31:47.319 05:26:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 01:31:47.319 05:26:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:31:47.319 05:26:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:31:47.319 05:26:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:47.319 05:26:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:31:47.319 05:26:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:31:47.319 05:26:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:47.319 05:26:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:31:47.319 "name": "raid_bdev1", 01:31:47.319 "uuid": "49ae7a20-4977-4898-8491-6c0dab3b5bf7", 01:31:47.319 "strip_size_kb": 64, 01:31:47.319 "state": "online", 01:31:47.319 "raid_level": "raid5f", 01:31:47.319 "superblock": true, 01:31:47.319 "num_base_bdevs": 4, 01:31:47.319 "num_base_bdevs_discovered": 4, 01:31:47.319 "num_base_bdevs_operational": 4, 01:31:47.319 "process": { 01:31:47.319 "type": "rebuild", 01:31:47.319 "target": "spare", 01:31:47.319 "progress": { 01:31:47.319 "blocks": 17280, 01:31:47.319 "percent": 9 01:31:47.319 } 01:31:47.319 }, 01:31:47.319 "base_bdevs_list": [ 01:31:47.319 { 01:31:47.319 "name": 
"spare", 01:31:47.319 "uuid": "0945aa32-b8da-5769-9d25-3f3f9f21399b", 01:31:47.319 "is_configured": true, 01:31:47.319 "data_offset": 2048, 01:31:47.319 "data_size": 63488 01:31:47.319 }, 01:31:47.319 { 01:31:47.319 "name": "BaseBdev2", 01:31:47.319 "uuid": "6beaad86-04b8-517c-9832-da0345f859c5", 01:31:47.319 "is_configured": true, 01:31:47.319 "data_offset": 2048, 01:31:47.319 "data_size": 63488 01:31:47.319 }, 01:31:47.319 { 01:31:47.319 "name": "BaseBdev3", 01:31:47.319 "uuid": "42499631-a9bc-56fc-80b5-b91d0f9da843", 01:31:47.319 "is_configured": true, 01:31:47.319 "data_offset": 2048, 01:31:47.319 "data_size": 63488 01:31:47.319 }, 01:31:47.319 { 01:31:47.319 "name": "BaseBdev4", 01:31:47.319 "uuid": "06225e10-ba4a-5416-ad31-40a85621f269", 01:31:47.319 "is_configured": true, 01:31:47.319 "data_offset": 2048, 01:31:47.319 "data_size": 63488 01:31:47.319 } 01:31:47.319 ] 01:31:47.319 }' 01:31:47.319 05:26:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:31:47.319 05:26:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 01:31:47.319 05:26:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:31:47.319 05:26:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 01:31:47.319 05:26:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 01:31:47.319 05:26:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:47.319 05:26:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:31:47.319 [2024-12-09 05:26:38.924405] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 01:31:47.579 [2024-12-09 05:26:38.972458] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 01:31:47.579 [2024-12-09 
05:26:38.972598] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:31:47.579 [2024-12-09 05:26:38.972625] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 01:31:47.579 [2024-12-09 05:26:38.972642] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 01:31:47.579 05:26:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:47.579 05:26:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 01:31:47.579 05:26:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:31:47.579 05:26:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:31:47.579 05:26:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 01:31:47.579 05:26:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:31:47.579 05:26:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 01:31:47.579 05:26:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:31:47.579 05:26:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:31:47.579 05:26:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:31:47.579 05:26:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 01:31:47.579 05:26:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:31:47.579 05:26:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:31:47.579 05:26:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:47.579 05:26:39 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 01:31:47.579 05:26:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:47.579 05:26:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:31:47.579 "name": "raid_bdev1", 01:31:47.579 "uuid": "49ae7a20-4977-4898-8491-6c0dab3b5bf7", 01:31:47.579 "strip_size_kb": 64, 01:31:47.579 "state": "online", 01:31:47.579 "raid_level": "raid5f", 01:31:47.579 "superblock": true, 01:31:47.579 "num_base_bdevs": 4, 01:31:47.579 "num_base_bdevs_discovered": 3, 01:31:47.579 "num_base_bdevs_operational": 3, 01:31:47.579 "base_bdevs_list": [ 01:31:47.579 { 01:31:47.579 "name": null, 01:31:47.579 "uuid": "00000000-0000-0000-0000-000000000000", 01:31:47.579 "is_configured": false, 01:31:47.579 "data_offset": 0, 01:31:47.579 "data_size": 63488 01:31:47.579 }, 01:31:47.579 { 01:31:47.579 "name": "BaseBdev2", 01:31:47.579 "uuid": "6beaad86-04b8-517c-9832-da0345f859c5", 01:31:47.579 "is_configured": true, 01:31:47.579 "data_offset": 2048, 01:31:47.579 "data_size": 63488 01:31:47.579 }, 01:31:47.579 { 01:31:47.579 "name": "BaseBdev3", 01:31:47.579 "uuid": "42499631-a9bc-56fc-80b5-b91d0f9da843", 01:31:47.579 "is_configured": true, 01:31:47.579 "data_offset": 2048, 01:31:47.579 "data_size": 63488 01:31:47.579 }, 01:31:47.579 { 01:31:47.579 "name": "BaseBdev4", 01:31:47.579 "uuid": "06225e10-ba4a-5416-ad31-40a85621f269", 01:31:47.579 "is_configured": true, 01:31:47.579 "data_offset": 2048, 01:31:47.579 "data_size": 63488 01:31:47.579 } 01:31:47.579 ] 01:31:47.579 }' 01:31:47.579 05:26:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:31:47.579 05:26:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:31:48.145 05:26:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 01:31:48.145 05:26:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 01:31:48.145 05:26:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 01:31:48.145 05:26:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 01:31:48.145 05:26:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:31:48.145 05:26:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:31:48.145 05:26:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:48.145 05:26:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:31:48.145 05:26:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:31:48.145 05:26:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:48.145 05:26:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:31:48.145 "name": "raid_bdev1", 01:31:48.145 "uuid": "49ae7a20-4977-4898-8491-6c0dab3b5bf7", 01:31:48.145 "strip_size_kb": 64, 01:31:48.145 "state": "online", 01:31:48.145 "raid_level": "raid5f", 01:31:48.145 "superblock": true, 01:31:48.145 "num_base_bdevs": 4, 01:31:48.145 "num_base_bdevs_discovered": 3, 01:31:48.145 "num_base_bdevs_operational": 3, 01:31:48.145 "base_bdevs_list": [ 01:31:48.145 { 01:31:48.145 "name": null, 01:31:48.145 "uuid": "00000000-0000-0000-0000-000000000000", 01:31:48.145 "is_configured": false, 01:31:48.145 "data_offset": 0, 01:31:48.145 "data_size": 63488 01:31:48.145 }, 01:31:48.145 { 01:31:48.145 "name": "BaseBdev2", 01:31:48.145 "uuid": "6beaad86-04b8-517c-9832-da0345f859c5", 01:31:48.146 "is_configured": true, 01:31:48.146 "data_offset": 2048, 01:31:48.146 "data_size": 63488 01:31:48.146 }, 01:31:48.146 { 01:31:48.146 "name": "BaseBdev3", 01:31:48.146 "uuid": "42499631-a9bc-56fc-80b5-b91d0f9da843", 01:31:48.146 "is_configured": true, 
01:31:48.146 "data_offset": 2048, 01:31:48.146 "data_size": 63488 01:31:48.146 }, 01:31:48.146 { 01:31:48.146 "name": "BaseBdev4", 01:31:48.146 "uuid": "06225e10-ba4a-5416-ad31-40a85621f269", 01:31:48.146 "is_configured": true, 01:31:48.146 "data_offset": 2048, 01:31:48.146 "data_size": 63488 01:31:48.146 } 01:31:48.146 ] 01:31:48.146 }' 01:31:48.146 05:26:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:31:48.146 05:26:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 01:31:48.146 05:26:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:31:48.146 05:26:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 01:31:48.146 05:26:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 01:31:48.146 05:26:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:48.146 05:26:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:31:48.146 [2024-12-09 05:26:39.700513] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 01:31:48.146 [2024-12-09 05:26:39.714889] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002ab20 01:31:48.146 05:26:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:48.146 05:26:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 01:31:48.146 [2024-12-09 05:26:39.724223] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 01:31:49.521 05:26:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 01:31:49.521 05:26:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:31:49.521 05:26:40 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 01:31:49.521 05:26:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 01:31:49.521 05:26:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:31:49.521 05:26:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:31:49.521 05:26:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:31:49.521 05:26:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:49.521 05:26:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:31:49.521 05:26:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:49.521 05:26:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:31:49.521 "name": "raid_bdev1", 01:31:49.521 "uuid": "49ae7a20-4977-4898-8491-6c0dab3b5bf7", 01:31:49.521 "strip_size_kb": 64, 01:31:49.521 "state": "online", 01:31:49.521 "raid_level": "raid5f", 01:31:49.521 "superblock": true, 01:31:49.521 "num_base_bdevs": 4, 01:31:49.521 "num_base_bdevs_discovered": 4, 01:31:49.521 "num_base_bdevs_operational": 4, 01:31:49.521 "process": { 01:31:49.521 "type": "rebuild", 01:31:49.521 "target": "spare", 01:31:49.521 "progress": { 01:31:49.521 "blocks": 17280, 01:31:49.521 "percent": 9 01:31:49.521 } 01:31:49.521 }, 01:31:49.521 "base_bdevs_list": [ 01:31:49.521 { 01:31:49.521 "name": "spare", 01:31:49.521 "uuid": "0945aa32-b8da-5769-9d25-3f3f9f21399b", 01:31:49.521 "is_configured": true, 01:31:49.521 "data_offset": 2048, 01:31:49.521 "data_size": 63488 01:31:49.521 }, 01:31:49.521 { 01:31:49.522 "name": "BaseBdev2", 01:31:49.522 "uuid": "6beaad86-04b8-517c-9832-da0345f859c5", 01:31:49.522 "is_configured": true, 01:31:49.522 "data_offset": 2048, 01:31:49.522 "data_size": 63488 
01:31:49.522 }, 01:31:49.522 { 01:31:49.522 "name": "BaseBdev3", 01:31:49.522 "uuid": "42499631-a9bc-56fc-80b5-b91d0f9da843", 01:31:49.522 "is_configured": true, 01:31:49.522 "data_offset": 2048, 01:31:49.522 "data_size": 63488 01:31:49.522 }, 01:31:49.522 { 01:31:49.522 "name": "BaseBdev4", 01:31:49.522 "uuid": "06225e10-ba4a-5416-ad31-40a85621f269", 01:31:49.522 "is_configured": true, 01:31:49.522 "data_offset": 2048, 01:31:49.522 "data_size": 63488 01:31:49.522 } 01:31:49.522 ] 01:31:49.522 }' 01:31:49.522 05:26:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:31:49.522 05:26:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 01:31:49.522 05:26:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:31:49.522 05:26:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 01:31:49.522 05:26:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 01:31:49.522 05:26:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 01:31:49.522 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 01:31:49.522 05:26:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 01:31:49.522 05:26:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 01:31:49.522 05:26:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=702 01:31:49.522 05:26:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 01:31:49.522 05:26:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 01:31:49.522 05:26:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:31:49.522 05:26:40 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 01:31:49.522 05:26:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 01:31:49.522 05:26:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:31:49.522 05:26:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:31:49.522 05:26:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:31:49.522 05:26:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:49.522 05:26:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:31:49.522 05:26:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:49.522 05:26:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:31:49.522 "name": "raid_bdev1", 01:31:49.522 "uuid": "49ae7a20-4977-4898-8491-6c0dab3b5bf7", 01:31:49.522 "strip_size_kb": 64, 01:31:49.522 "state": "online", 01:31:49.522 "raid_level": "raid5f", 01:31:49.522 "superblock": true, 01:31:49.522 "num_base_bdevs": 4, 01:31:49.522 "num_base_bdevs_discovered": 4, 01:31:49.522 "num_base_bdevs_operational": 4, 01:31:49.522 "process": { 01:31:49.522 "type": "rebuild", 01:31:49.522 "target": "spare", 01:31:49.522 "progress": { 01:31:49.522 "blocks": 21120, 01:31:49.522 "percent": 11 01:31:49.522 } 01:31:49.522 }, 01:31:49.522 "base_bdevs_list": [ 01:31:49.522 { 01:31:49.522 "name": "spare", 01:31:49.522 "uuid": "0945aa32-b8da-5769-9d25-3f3f9f21399b", 01:31:49.522 "is_configured": true, 01:31:49.522 "data_offset": 2048, 01:31:49.522 "data_size": 63488 01:31:49.522 }, 01:31:49.522 { 01:31:49.522 "name": "BaseBdev2", 01:31:49.522 "uuid": "6beaad86-04b8-517c-9832-da0345f859c5", 01:31:49.522 "is_configured": true, 01:31:49.522 "data_offset": 2048, 01:31:49.522 "data_size": 63488 
01:31:49.522 }, 01:31:49.522 { 01:31:49.522 "name": "BaseBdev3", 01:31:49.522 "uuid": "42499631-a9bc-56fc-80b5-b91d0f9da843", 01:31:49.522 "is_configured": true, 01:31:49.522 "data_offset": 2048, 01:31:49.522 "data_size": 63488 01:31:49.522 }, 01:31:49.522 { 01:31:49.522 "name": "BaseBdev4", 01:31:49.522 "uuid": "06225e10-ba4a-5416-ad31-40a85621f269", 01:31:49.522 "is_configured": true, 01:31:49.522 "data_offset": 2048, 01:31:49.522 "data_size": 63488 01:31:49.522 } 01:31:49.522 ] 01:31:49.522 }' 01:31:49.522 05:26:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:31:49.522 05:26:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 01:31:49.522 05:26:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:31:49.522 05:26:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 01:31:49.522 05:26:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 01:31:50.456 05:26:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 01:31:50.456 05:26:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 01:31:50.456 05:26:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:31:50.456 05:26:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 01:31:50.456 05:26:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 01:31:50.456 05:26:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:31:50.456 05:26:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:31:50.456 05:26:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
01:31:50.456 05:26:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:50.456 05:26:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:31:50.714 05:26:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:50.714 05:26:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:31:50.714 "name": "raid_bdev1", 01:31:50.714 "uuid": "49ae7a20-4977-4898-8491-6c0dab3b5bf7", 01:31:50.714 "strip_size_kb": 64, 01:31:50.714 "state": "online", 01:31:50.714 "raid_level": "raid5f", 01:31:50.714 "superblock": true, 01:31:50.714 "num_base_bdevs": 4, 01:31:50.714 "num_base_bdevs_discovered": 4, 01:31:50.714 "num_base_bdevs_operational": 4, 01:31:50.714 "process": { 01:31:50.714 "type": "rebuild", 01:31:50.714 "target": "spare", 01:31:50.714 "progress": { 01:31:50.714 "blocks": 44160, 01:31:50.714 "percent": 23 01:31:50.714 } 01:31:50.714 }, 01:31:50.714 "base_bdevs_list": [ 01:31:50.714 { 01:31:50.714 "name": "spare", 01:31:50.714 "uuid": "0945aa32-b8da-5769-9d25-3f3f9f21399b", 01:31:50.714 "is_configured": true, 01:31:50.714 "data_offset": 2048, 01:31:50.714 "data_size": 63488 01:31:50.714 }, 01:31:50.714 { 01:31:50.714 "name": "BaseBdev2", 01:31:50.714 "uuid": "6beaad86-04b8-517c-9832-da0345f859c5", 01:31:50.714 "is_configured": true, 01:31:50.714 "data_offset": 2048, 01:31:50.714 "data_size": 63488 01:31:50.714 }, 01:31:50.714 { 01:31:50.714 "name": "BaseBdev3", 01:31:50.714 "uuid": "42499631-a9bc-56fc-80b5-b91d0f9da843", 01:31:50.714 "is_configured": true, 01:31:50.714 "data_offset": 2048, 01:31:50.714 "data_size": 63488 01:31:50.714 }, 01:31:50.714 { 01:31:50.714 "name": "BaseBdev4", 01:31:50.714 "uuid": "06225e10-ba4a-5416-ad31-40a85621f269", 01:31:50.714 "is_configured": true, 01:31:50.714 "data_offset": 2048, 01:31:50.714 "data_size": 63488 01:31:50.715 } 01:31:50.715 ] 01:31:50.715 }' 01:31:50.715 05:26:42 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:31:50.715 05:26:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 01:31:50.715 05:26:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:31:50.715 05:26:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 01:31:50.715 05:26:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 01:31:51.651 05:26:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 01:31:51.651 05:26:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 01:31:51.651 05:26:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:31:51.651 05:26:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 01:31:51.651 05:26:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 01:31:51.651 05:26:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:31:51.651 05:26:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:31:51.651 05:26:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:51.651 05:26:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:31:51.651 05:26:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:31:51.651 05:26:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:51.930 05:26:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:31:51.930 "name": "raid_bdev1", 01:31:51.930 "uuid": "49ae7a20-4977-4898-8491-6c0dab3b5bf7", 01:31:51.930 
"strip_size_kb": 64, 01:31:51.930 "state": "online", 01:31:51.930 "raid_level": "raid5f", 01:31:51.930 "superblock": true, 01:31:51.930 "num_base_bdevs": 4, 01:31:51.930 "num_base_bdevs_discovered": 4, 01:31:51.930 "num_base_bdevs_operational": 4, 01:31:51.930 "process": { 01:31:51.930 "type": "rebuild", 01:31:51.930 "target": "spare", 01:31:51.930 "progress": { 01:31:51.930 "blocks": 65280, 01:31:51.930 "percent": 34 01:31:51.930 } 01:31:51.930 }, 01:31:51.930 "base_bdevs_list": [ 01:31:51.930 { 01:31:51.930 "name": "spare", 01:31:51.930 "uuid": "0945aa32-b8da-5769-9d25-3f3f9f21399b", 01:31:51.930 "is_configured": true, 01:31:51.930 "data_offset": 2048, 01:31:51.930 "data_size": 63488 01:31:51.930 }, 01:31:51.930 { 01:31:51.930 "name": "BaseBdev2", 01:31:51.930 "uuid": "6beaad86-04b8-517c-9832-da0345f859c5", 01:31:51.930 "is_configured": true, 01:31:51.930 "data_offset": 2048, 01:31:51.930 "data_size": 63488 01:31:51.930 }, 01:31:51.930 { 01:31:51.930 "name": "BaseBdev3", 01:31:51.930 "uuid": "42499631-a9bc-56fc-80b5-b91d0f9da843", 01:31:51.930 "is_configured": true, 01:31:51.930 "data_offset": 2048, 01:31:51.930 "data_size": 63488 01:31:51.930 }, 01:31:51.930 { 01:31:51.930 "name": "BaseBdev4", 01:31:51.930 "uuid": "06225e10-ba4a-5416-ad31-40a85621f269", 01:31:51.930 "is_configured": true, 01:31:51.930 "data_offset": 2048, 01:31:51.930 "data_size": 63488 01:31:51.930 } 01:31:51.930 ] 01:31:51.930 }' 01:31:51.930 05:26:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:31:51.930 05:26:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 01:31:51.930 05:26:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:31:51.930 05:26:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 01:31:51.930 05:26:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 01:31:52.866 
05:26:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 01:31:52.866 05:26:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 01:31:52.866 05:26:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:31:52.866 05:26:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 01:31:52.866 05:26:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 01:31:52.866 05:26:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:31:52.866 05:26:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:31:52.866 05:26:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:31:52.866 05:26:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:52.866 05:26:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:31:52.866 05:26:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:52.866 05:26:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:31:52.866 "name": "raid_bdev1", 01:31:52.866 "uuid": "49ae7a20-4977-4898-8491-6c0dab3b5bf7", 01:31:52.866 "strip_size_kb": 64, 01:31:52.866 "state": "online", 01:31:52.866 "raid_level": "raid5f", 01:31:52.866 "superblock": true, 01:31:52.866 "num_base_bdevs": 4, 01:31:52.866 "num_base_bdevs_discovered": 4, 01:31:52.866 "num_base_bdevs_operational": 4, 01:31:52.866 "process": { 01:31:52.866 "type": "rebuild", 01:31:52.866 "target": "spare", 01:31:52.866 "progress": { 01:31:52.866 "blocks": 88320, 01:31:52.866 "percent": 46 01:31:52.866 } 01:31:52.866 }, 01:31:52.866 "base_bdevs_list": [ 01:31:52.866 { 01:31:52.866 "name": "spare", 01:31:52.866 "uuid": 
"0945aa32-b8da-5769-9d25-3f3f9f21399b", 01:31:52.866 "is_configured": true, 01:31:52.866 "data_offset": 2048, 01:31:52.866 "data_size": 63488 01:31:52.866 }, 01:31:52.866 { 01:31:52.866 "name": "BaseBdev2", 01:31:52.866 "uuid": "6beaad86-04b8-517c-9832-da0345f859c5", 01:31:52.866 "is_configured": true, 01:31:52.866 "data_offset": 2048, 01:31:52.866 "data_size": 63488 01:31:52.866 }, 01:31:52.866 { 01:31:52.866 "name": "BaseBdev3", 01:31:52.866 "uuid": "42499631-a9bc-56fc-80b5-b91d0f9da843", 01:31:52.866 "is_configured": true, 01:31:52.866 "data_offset": 2048, 01:31:52.866 "data_size": 63488 01:31:52.866 }, 01:31:52.866 { 01:31:52.866 "name": "BaseBdev4", 01:31:52.866 "uuid": "06225e10-ba4a-5416-ad31-40a85621f269", 01:31:52.866 "is_configured": true, 01:31:52.866 "data_offset": 2048, 01:31:52.866 "data_size": 63488 01:31:52.866 } 01:31:52.866 ] 01:31:52.866 }' 01:31:52.866 05:26:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:31:53.226 05:26:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 01:31:53.226 05:26:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:31:53.226 05:26:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 01:31:53.226 05:26:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 01:31:54.161 05:26:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 01:31:54.161 05:26:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 01:31:54.161 05:26:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:31:54.161 05:26:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 01:31:54.161 05:26:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local 
target=spare 01:31:54.161 05:26:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:31:54.161 05:26:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:31:54.161 05:26:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:31:54.161 05:26:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:54.161 05:26:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:31:54.161 05:26:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:54.161 05:26:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:31:54.161 "name": "raid_bdev1", 01:31:54.161 "uuid": "49ae7a20-4977-4898-8491-6c0dab3b5bf7", 01:31:54.161 "strip_size_kb": 64, 01:31:54.161 "state": "online", 01:31:54.161 "raid_level": "raid5f", 01:31:54.161 "superblock": true, 01:31:54.161 "num_base_bdevs": 4, 01:31:54.161 "num_base_bdevs_discovered": 4, 01:31:54.161 "num_base_bdevs_operational": 4, 01:31:54.161 "process": { 01:31:54.161 "type": "rebuild", 01:31:54.161 "target": "spare", 01:31:54.161 "progress": { 01:31:54.161 "blocks": 109440, 01:31:54.161 "percent": 57 01:31:54.161 } 01:31:54.161 }, 01:31:54.161 "base_bdevs_list": [ 01:31:54.161 { 01:31:54.161 "name": "spare", 01:31:54.161 "uuid": "0945aa32-b8da-5769-9d25-3f3f9f21399b", 01:31:54.161 "is_configured": true, 01:31:54.161 "data_offset": 2048, 01:31:54.161 "data_size": 63488 01:31:54.161 }, 01:31:54.161 { 01:31:54.161 "name": "BaseBdev2", 01:31:54.161 "uuid": "6beaad86-04b8-517c-9832-da0345f859c5", 01:31:54.161 "is_configured": true, 01:31:54.161 "data_offset": 2048, 01:31:54.161 "data_size": 63488 01:31:54.161 }, 01:31:54.161 { 01:31:54.161 "name": "BaseBdev3", 01:31:54.161 "uuid": "42499631-a9bc-56fc-80b5-b91d0f9da843", 01:31:54.161 "is_configured": true, 01:31:54.161 
"data_offset": 2048, 01:31:54.161 "data_size": 63488 01:31:54.161 }, 01:31:54.161 { 01:31:54.161 "name": "BaseBdev4", 01:31:54.161 "uuid": "06225e10-ba4a-5416-ad31-40a85621f269", 01:31:54.161 "is_configured": true, 01:31:54.161 "data_offset": 2048, 01:31:54.161 "data_size": 63488 01:31:54.161 } 01:31:54.161 ] 01:31:54.161 }' 01:31:54.161 05:26:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:31:54.161 05:26:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 01:31:54.161 05:26:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:31:54.161 05:26:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 01:31:54.161 05:26:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 01:31:55.537 05:26:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 01:31:55.537 05:26:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 01:31:55.537 05:26:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:31:55.537 05:26:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 01:31:55.537 05:26:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 01:31:55.537 05:26:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:31:55.537 05:26:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:31:55.537 05:26:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:31:55.537 05:26:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:55.537 05:26:46 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 01:31:55.537 05:26:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:55.537 05:26:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:31:55.537 "name": "raid_bdev1", 01:31:55.537 "uuid": "49ae7a20-4977-4898-8491-6c0dab3b5bf7", 01:31:55.537 "strip_size_kb": 64, 01:31:55.537 "state": "online", 01:31:55.537 "raid_level": "raid5f", 01:31:55.537 "superblock": true, 01:31:55.537 "num_base_bdevs": 4, 01:31:55.537 "num_base_bdevs_discovered": 4, 01:31:55.537 "num_base_bdevs_operational": 4, 01:31:55.537 "process": { 01:31:55.537 "type": "rebuild", 01:31:55.537 "target": "spare", 01:31:55.537 "progress": { 01:31:55.537 "blocks": 132480, 01:31:55.537 "percent": 69 01:31:55.537 } 01:31:55.537 }, 01:31:55.537 "base_bdevs_list": [ 01:31:55.537 { 01:31:55.537 "name": "spare", 01:31:55.537 "uuid": "0945aa32-b8da-5769-9d25-3f3f9f21399b", 01:31:55.537 "is_configured": true, 01:31:55.537 "data_offset": 2048, 01:31:55.537 "data_size": 63488 01:31:55.537 }, 01:31:55.537 { 01:31:55.537 "name": "BaseBdev2", 01:31:55.537 "uuid": "6beaad86-04b8-517c-9832-da0345f859c5", 01:31:55.537 "is_configured": true, 01:31:55.537 "data_offset": 2048, 01:31:55.537 "data_size": 63488 01:31:55.537 }, 01:31:55.537 { 01:31:55.537 "name": "BaseBdev3", 01:31:55.537 "uuid": "42499631-a9bc-56fc-80b5-b91d0f9da843", 01:31:55.537 "is_configured": true, 01:31:55.537 "data_offset": 2048, 01:31:55.537 "data_size": 63488 01:31:55.537 }, 01:31:55.537 { 01:31:55.537 "name": "BaseBdev4", 01:31:55.537 "uuid": "06225e10-ba4a-5416-ad31-40a85621f269", 01:31:55.537 "is_configured": true, 01:31:55.537 "data_offset": 2048, 01:31:55.537 "data_size": 63488 01:31:55.537 } 01:31:55.537 ] 01:31:55.538 }' 01:31:55.538 05:26:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:31:55.538 05:26:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild 
== \r\e\b\u\i\l\d ]] 01:31:55.538 05:26:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:31:55.538 05:26:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 01:31:55.538 05:26:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 01:31:56.473 05:26:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 01:31:56.473 05:26:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 01:31:56.473 05:26:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:31:56.473 05:26:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 01:31:56.473 05:26:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 01:31:56.473 05:26:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:31:56.473 05:26:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:31:56.473 05:26:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:56.473 05:26:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:31:56.473 05:26:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:31:56.473 05:26:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:56.473 05:26:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:31:56.473 "name": "raid_bdev1", 01:31:56.473 "uuid": "49ae7a20-4977-4898-8491-6c0dab3b5bf7", 01:31:56.473 "strip_size_kb": 64, 01:31:56.473 "state": "online", 01:31:56.473 "raid_level": "raid5f", 01:31:56.473 "superblock": true, 01:31:56.473 "num_base_bdevs": 4, 01:31:56.473 "num_base_bdevs_discovered": 4, 
01:31:56.473 "num_base_bdevs_operational": 4, 01:31:56.473 "process": { 01:31:56.473 "type": "rebuild", 01:31:56.473 "target": "spare", 01:31:56.473 "progress": { 01:31:56.473 "blocks": 155520, 01:31:56.473 "percent": 81 01:31:56.473 } 01:31:56.473 }, 01:31:56.473 "base_bdevs_list": [ 01:31:56.473 { 01:31:56.473 "name": "spare", 01:31:56.473 "uuid": "0945aa32-b8da-5769-9d25-3f3f9f21399b", 01:31:56.473 "is_configured": true, 01:31:56.473 "data_offset": 2048, 01:31:56.473 "data_size": 63488 01:31:56.473 }, 01:31:56.473 { 01:31:56.473 "name": "BaseBdev2", 01:31:56.473 "uuid": "6beaad86-04b8-517c-9832-da0345f859c5", 01:31:56.473 "is_configured": true, 01:31:56.473 "data_offset": 2048, 01:31:56.473 "data_size": 63488 01:31:56.473 }, 01:31:56.473 { 01:31:56.473 "name": "BaseBdev3", 01:31:56.473 "uuid": "42499631-a9bc-56fc-80b5-b91d0f9da843", 01:31:56.473 "is_configured": true, 01:31:56.473 "data_offset": 2048, 01:31:56.473 "data_size": 63488 01:31:56.473 }, 01:31:56.473 { 01:31:56.473 "name": "BaseBdev4", 01:31:56.473 "uuid": "06225e10-ba4a-5416-ad31-40a85621f269", 01:31:56.473 "is_configured": true, 01:31:56.473 "data_offset": 2048, 01:31:56.473 "data_size": 63488 01:31:56.473 } 01:31:56.473 ] 01:31:56.473 }' 01:31:56.474 05:26:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:31:56.474 05:26:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 01:31:56.474 05:26:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:31:56.474 05:26:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 01:31:56.474 05:26:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 01:31:57.849 05:26:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 01:31:57.849 05:26:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 01:31:57.849 05:26:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:31:57.849 05:26:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 01:31:57.849 05:26:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 01:31:57.849 05:26:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:31:57.849 05:26:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:31:57.849 05:26:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:31:57.849 05:26:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:57.849 05:26:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:31:57.849 05:26:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:57.849 05:26:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:31:57.849 "name": "raid_bdev1", 01:31:57.849 "uuid": "49ae7a20-4977-4898-8491-6c0dab3b5bf7", 01:31:57.849 "strip_size_kb": 64, 01:31:57.849 "state": "online", 01:31:57.849 "raid_level": "raid5f", 01:31:57.849 "superblock": true, 01:31:57.849 "num_base_bdevs": 4, 01:31:57.849 "num_base_bdevs_discovered": 4, 01:31:57.849 "num_base_bdevs_operational": 4, 01:31:57.849 "process": { 01:31:57.849 "type": "rebuild", 01:31:57.849 "target": "spare", 01:31:57.849 "progress": { 01:31:57.849 "blocks": 176640, 01:31:57.849 "percent": 92 01:31:57.849 } 01:31:57.849 }, 01:31:57.849 "base_bdevs_list": [ 01:31:57.849 { 01:31:57.849 "name": "spare", 01:31:57.849 "uuid": "0945aa32-b8da-5769-9d25-3f3f9f21399b", 01:31:57.849 "is_configured": true, 01:31:57.849 "data_offset": 2048, 01:31:57.849 "data_size": 63488 01:31:57.849 }, 01:31:57.849 { 01:31:57.849 "name": "BaseBdev2", 
01:31:57.849 "uuid": "6beaad86-04b8-517c-9832-da0345f859c5", 01:31:57.849 "is_configured": true, 01:31:57.849 "data_offset": 2048, 01:31:57.849 "data_size": 63488 01:31:57.849 }, 01:31:57.849 { 01:31:57.849 "name": "BaseBdev3", 01:31:57.849 "uuid": "42499631-a9bc-56fc-80b5-b91d0f9da843", 01:31:57.849 "is_configured": true, 01:31:57.849 "data_offset": 2048, 01:31:57.849 "data_size": 63488 01:31:57.849 }, 01:31:57.849 { 01:31:57.849 "name": "BaseBdev4", 01:31:57.849 "uuid": "06225e10-ba4a-5416-ad31-40a85621f269", 01:31:57.849 "is_configured": true, 01:31:57.849 "data_offset": 2048, 01:31:57.849 "data_size": 63488 01:31:57.849 } 01:31:57.849 ] 01:31:57.849 }' 01:31:57.849 05:26:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:31:57.849 05:26:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 01:31:57.849 05:26:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:31:57.849 05:26:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 01:31:57.849 05:26:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 01:31:58.443 [2024-12-09 05:26:49.828485] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 01:31:58.443 [2024-12-09 05:26:49.828607] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 01:31:58.443 [2024-12-09 05:26:49.828817] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:31:58.701 05:26:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 01:31:58.701 05:26:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 01:31:58.701 05:26:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:31:58.701 05:26:50 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 01:31:58.701 05:26:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 01:31:58.701 05:26:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:31:58.701 05:26:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:31:58.701 05:26:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:31:58.701 05:26:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:58.701 05:26:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:31:58.701 05:26:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:58.701 05:26:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:31:58.701 "name": "raid_bdev1", 01:31:58.701 "uuid": "49ae7a20-4977-4898-8491-6c0dab3b5bf7", 01:31:58.701 "strip_size_kb": 64, 01:31:58.701 "state": "online", 01:31:58.701 "raid_level": "raid5f", 01:31:58.701 "superblock": true, 01:31:58.701 "num_base_bdevs": 4, 01:31:58.701 "num_base_bdevs_discovered": 4, 01:31:58.701 "num_base_bdevs_operational": 4, 01:31:58.701 "base_bdevs_list": [ 01:31:58.701 { 01:31:58.701 "name": "spare", 01:31:58.701 "uuid": "0945aa32-b8da-5769-9d25-3f3f9f21399b", 01:31:58.701 "is_configured": true, 01:31:58.701 "data_offset": 2048, 01:31:58.701 "data_size": 63488 01:31:58.701 }, 01:31:58.701 { 01:31:58.701 "name": "BaseBdev2", 01:31:58.701 "uuid": "6beaad86-04b8-517c-9832-da0345f859c5", 01:31:58.701 "is_configured": true, 01:31:58.701 "data_offset": 2048, 01:31:58.701 "data_size": 63488 01:31:58.701 }, 01:31:58.701 { 01:31:58.701 "name": "BaseBdev3", 01:31:58.701 "uuid": "42499631-a9bc-56fc-80b5-b91d0f9da843", 01:31:58.701 "is_configured": true, 01:31:58.701 "data_offset": 2048, 01:31:58.701 
"data_size": 63488 01:31:58.701 }, 01:31:58.701 { 01:31:58.701 "name": "BaseBdev4", 01:31:58.701 "uuid": "06225e10-ba4a-5416-ad31-40a85621f269", 01:31:58.701 "is_configured": true, 01:31:58.701 "data_offset": 2048, 01:31:58.701 "data_size": 63488 01:31:58.701 } 01:31:58.701 ] 01:31:58.701 }' 01:31:58.701 05:26:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:31:58.972 05:26:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 01:31:58.973 05:26:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:31:58.973 05:26:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 01:31:58.973 05:26:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 01:31:58.973 05:26:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 01:31:58.973 05:26:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:31:58.973 05:26:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 01:31:58.973 05:26:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 01:31:58.973 05:26:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:31:58.973 05:26:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:31:58.973 05:26:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:58.973 05:26:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:31:58.973 05:26:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:31:58.973 05:26:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:58.973 05:26:50 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:31:58.973 "name": "raid_bdev1", 01:31:58.973 "uuid": "49ae7a20-4977-4898-8491-6c0dab3b5bf7", 01:31:58.973 "strip_size_kb": 64, 01:31:58.973 "state": "online", 01:31:58.973 "raid_level": "raid5f", 01:31:58.973 "superblock": true, 01:31:58.973 "num_base_bdevs": 4, 01:31:58.973 "num_base_bdevs_discovered": 4, 01:31:58.973 "num_base_bdevs_operational": 4, 01:31:58.973 "base_bdevs_list": [ 01:31:58.973 { 01:31:58.973 "name": "spare", 01:31:58.973 "uuid": "0945aa32-b8da-5769-9d25-3f3f9f21399b", 01:31:58.973 "is_configured": true, 01:31:58.973 "data_offset": 2048, 01:31:58.973 "data_size": 63488 01:31:58.973 }, 01:31:58.973 { 01:31:58.973 "name": "BaseBdev2", 01:31:58.973 "uuid": "6beaad86-04b8-517c-9832-da0345f859c5", 01:31:58.973 "is_configured": true, 01:31:58.973 "data_offset": 2048, 01:31:58.973 "data_size": 63488 01:31:58.973 }, 01:31:58.973 { 01:31:58.973 "name": "BaseBdev3", 01:31:58.973 "uuid": "42499631-a9bc-56fc-80b5-b91d0f9da843", 01:31:58.973 "is_configured": true, 01:31:58.973 "data_offset": 2048, 01:31:58.973 "data_size": 63488 01:31:58.973 }, 01:31:58.973 { 01:31:58.973 "name": "BaseBdev4", 01:31:58.973 "uuid": "06225e10-ba4a-5416-ad31-40a85621f269", 01:31:58.973 "is_configured": true, 01:31:58.973 "data_offset": 2048, 01:31:58.973 "data_size": 63488 01:31:58.973 } 01:31:58.973 ] 01:31:58.973 }' 01:31:58.973 05:26:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:31:58.973 05:26:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 01:31:58.973 05:26:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:31:58.973 05:26:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 01:31:58.973 05:26:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 
01:31:58.973 05:26:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:31:58.973 05:26:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:31:58.973 05:26:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 01:31:58.973 05:26:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:31:58.973 05:26:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 01:31:58.973 05:26:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:31:58.973 05:26:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:31:58.973 05:26:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:31:58.973 05:26:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 01:31:58.973 05:26:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:31:58.973 05:26:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:31:58.973 05:26:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:58.973 05:26:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:31:59.317 05:26:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:59.318 05:26:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:31:59.318 "name": "raid_bdev1", 01:31:59.318 "uuid": "49ae7a20-4977-4898-8491-6c0dab3b5bf7", 01:31:59.318 "strip_size_kb": 64, 01:31:59.318 "state": "online", 01:31:59.318 "raid_level": "raid5f", 01:31:59.318 "superblock": true, 01:31:59.318 "num_base_bdevs": 4, 01:31:59.318 "num_base_bdevs_discovered": 4, 01:31:59.318 
"num_base_bdevs_operational": 4, 01:31:59.318 "base_bdevs_list": [ 01:31:59.318 { 01:31:59.318 "name": "spare", 01:31:59.318 "uuid": "0945aa32-b8da-5769-9d25-3f3f9f21399b", 01:31:59.318 "is_configured": true, 01:31:59.318 "data_offset": 2048, 01:31:59.318 "data_size": 63488 01:31:59.318 }, 01:31:59.318 { 01:31:59.318 "name": "BaseBdev2", 01:31:59.318 "uuid": "6beaad86-04b8-517c-9832-da0345f859c5", 01:31:59.318 "is_configured": true, 01:31:59.318 "data_offset": 2048, 01:31:59.318 "data_size": 63488 01:31:59.318 }, 01:31:59.318 { 01:31:59.318 "name": "BaseBdev3", 01:31:59.318 "uuid": "42499631-a9bc-56fc-80b5-b91d0f9da843", 01:31:59.318 "is_configured": true, 01:31:59.318 "data_offset": 2048, 01:31:59.318 "data_size": 63488 01:31:59.318 }, 01:31:59.318 { 01:31:59.318 "name": "BaseBdev4", 01:31:59.318 "uuid": "06225e10-ba4a-5416-ad31-40a85621f269", 01:31:59.318 "is_configured": true, 01:31:59.318 "data_offset": 2048, 01:31:59.318 "data_size": 63488 01:31:59.318 } 01:31:59.318 ] 01:31:59.318 }' 01:31:59.318 05:26:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:31:59.318 05:26:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:31:59.590 05:26:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 01:31:59.590 05:26:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:59.590 05:26:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:31:59.590 [2024-12-09 05:26:51.120139] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 01:31:59.590 [2024-12-09 05:26:51.120197] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 01:31:59.590 [2024-12-09 05:26:51.120291] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 01:31:59.590 [2024-12-09 05:26:51.120445] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: 
raid bdev base bdevs is 0, going to free all in destruct 01:31:59.590 [2024-12-09 05:26:51.120492] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 01:31:59.590 05:26:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:59.590 05:26:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 01:31:59.590 05:26:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 01:31:59.590 05:26:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:59.590 05:26:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:31:59.590 05:26:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:59.590 05:26:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 01:31:59.590 05:26:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 01:31:59.590 05:26:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 01:31:59.590 05:26:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 01:31:59.590 05:26:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 01:31:59.590 05:26:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 01:31:59.590 05:26:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 01:31:59.590 05:26:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 01:31:59.590 05:26:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 01:31:59.590 05:26:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 01:31:59.590 05:26:51 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 01:31:59.590 05:26:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 01:31:59.590 05:26:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 01:32:00.157 /dev/nbd0 01:32:00.157 05:26:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 01:32:00.157 05:26:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 01:32:00.157 05:26:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 01:32:00.157 05:26:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 01:32:00.157 05:26:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 01:32:00.157 05:26:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 01:32:00.157 05:26:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 01:32:00.157 05:26:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 01:32:00.157 05:26:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 01:32:00.157 05:26:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 01:32:00.157 05:26:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 01:32:00.157 1+0 records in 01:32:00.157 1+0 records out 01:32:00.157 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000260956 s, 15.7 MB/s 01:32:00.157 05:26:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:32:00.157 05:26:51 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@890 -- # size=4096 01:32:00.157 05:26:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:32:00.157 05:26:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 01:32:00.157 05:26:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 01:32:00.157 05:26:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 01:32:00.157 05:26:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 01:32:00.157 05:26:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 01:32:00.416 /dev/nbd1 01:32:00.416 05:26:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 01:32:00.416 05:26:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 01:32:00.416 05:26:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 01:32:00.416 05:26:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 01:32:00.416 05:26:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 01:32:00.416 05:26:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 01:32:00.416 05:26:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 01:32:00.416 05:26:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 01:32:00.416 05:26:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 01:32:00.416 05:26:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 01:32:00.416 05:26:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 01:32:00.416 1+0 records in 01:32:00.416 1+0 records out 01:32:00.416 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000421748 s, 9.7 MB/s 01:32:00.416 05:26:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:32:00.416 05:26:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 01:32:00.416 05:26:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:32:00.416 05:26:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 01:32:00.416 05:26:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 01:32:00.416 05:26:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 01:32:00.416 05:26:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 01:32:00.416 05:26:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 01:32:00.673 05:26:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 01:32:00.673 05:26:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 01:32:00.673 05:26:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 01:32:00.673 05:26:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 01:32:00.673 05:26:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 01:32:00.673 05:26:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 01:32:00.673 05:26:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 
01:32:00.935 05:26:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 01:32:00.935 05:26:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 01:32:00.935 05:26:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 01:32:00.935 05:26:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:32:00.935 05:26:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:32:00.935 05:26:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 01:32:00.935 05:26:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 01:32:00.935 05:26:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 01:32:00.935 05:26:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 01:32:00.935 05:26:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 01:32:01.194 05:26:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 01:32:01.194 05:26:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 01:32:01.194 05:26:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 01:32:01.194 05:26:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:32:01.194 05:26:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:32:01.194 05:26:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 01:32:01.194 05:26:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 01:32:01.194 05:26:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 01:32:01.194 05:26:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # 
'[' true = true ']' 01:32:01.194 05:26:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 01:32:01.195 05:26:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:01.195 05:26:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:32:01.195 05:26:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:01.195 05:26:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 01:32:01.195 05:26:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:01.195 05:26:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:32:01.195 [2024-12-09 05:26:52.704258] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 01:32:01.195 [2024-12-09 05:26:52.704322] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:32:01.195 [2024-12-09 05:26:52.704375] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 01:32:01.195 [2024-12-09 05:26:52.704394] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:32:01.195 [2024-12-09 05:26:52.707443] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:32:01.195 [2024-12-09 05:26:52.707489] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 01:32:01.195 [2024-12-09 05:26:52.707608] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 01:32:01.195 [2024-12-09 05:26:52.707677] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 01:32:01.195 [2024-12-09 05:26:52.707852] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 01:32:01.195 [2024-12-09 05:26:52.708009] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev3 is claimed 01:32:01.195 [2024-12-09 05:26:52.708153] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 01:32:01.195 spare 01:32:01.195 05:26:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:01.195 05:26:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 01:32:01.195 05:26:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:01.195 05:26:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:32:01.195 [2024-12-09 05:26:52.808298] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 01:32:01.195 [2024-12-09 05:26:52.808393] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 01:32:01.195 [2024-12-09 05:26:52.808869] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000491d0 01:32:01.453 [2024-12-09 05:26:52.815337] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 01:32:01.453 [2024-12-09 05:26:52.815411] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 01:32:01.453 [2024-12-09 05:26:52.815681] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:32:01.453 05:26:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:01.453 05:26:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 01:32:01.453 05:26:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:32:01.453 05:26:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:32:01.453 05:26:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 01:32:01.453 05:26:52 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:32:01.453 05:26:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 01:32:01.453 05:26:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:32:01.453 05:26:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:32:01.453 05:26:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:32:01.453 05:26:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 01:32:01.453 05:26:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:32:01.453 05:26:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:01.453 05:26:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:32:01.453 05:26:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:32:01.453 05:26:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:01.453 05:26:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:32:01.453 "name": "raid_bdev1", 01:32:01.453 "uuid": "49ae7a20-4977-4898-8491-6c0dab3b5bf7", 01:32:01.453 "strip_size_kb": 64, 01:32:01.453 "state": "online", 01:32:01.453 "raid_level": "raid5f", 01:32:01.453 "superblock": true, 01:32:01.453 "num_base_bdevs": 4, 01:32:01.453 "num_base_bdevs_discovered": 4, 01:32:01.453 "num_base_bdevs_operational": 4, 01:32:01.453 "base_bdevs_list": [ 01:32:01.453 { 01:32:01.453 "name": "spare", 01:32:01.453 "uuid": "0945aa32-b8da-5769-9d25-3f3f9f21399b", 01:32:01.453 "is_configured": true, 01:32:01.453 "data_offset": 2048, 01:32:01.453 "data_size": 63488 01:32:01.453 }, 01:32:01.453 { 01:32:01.453 "name": "BaseBdev2", 01:32:01.453 "uuid": 
"6beaad86-04b8-517c-9832-da0345f859c5", 01:32:01.453 "is_configured": true, 01:32:01.453 "data_offset": 2048, 01:32:01.453 "data_size": 63488 01:32:01.453 }, 01:32:01.453 { 01:32:01.453 "name": "BaseBdev3", 01:32:01.453 "uuid": "42499631-a9bc-56fc-80b5-b91d0f9da843", 01:32:01.453 "is_configured": true, 01:32:01.453 "data_offset": 2048, 01:32:01.453 "data_size": 63488 01:32:01.453 }, 01:32:01.453 { 01:32:01.453 "name": "BaseBdev4", 01:32:01.453 "uuid": "06225e10-ba4a-5416-ad31-40a85621f269", 01:32:01.453 "is_configured": true, 01:32:01.453 "data_offset": 2048, 01:32:01.453 "data_size": 63488 01:32:01.453 } 01:32:01.453 ] 01:32:01.453 }' 01:32:01.453 05:26:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:32:01.453 05:26:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:32:02.019 05:26:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 01:32:02.019 05:26:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:32:02.019 05:26:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 01:32:02.019 05:26:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 01:32:02.019 05:26:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:32:02.019 05:26:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:32:02.019 05:26:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:02.019 05:26:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:32:02.019 05:26:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:32:02.019 05:26:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:02.019 05:26:53 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:32:02.019 "name": "raid_bdev1", 01:32:02.019 "uuid": "49ae7a20-4977-4898-8491-6c0dab3b5bf7", 01:32:02.019 "strip_size_kb": 64, 01:32:02.019 "state": "online", 01:32:02.019 "raid_level": "raid5f", 01:32:02.019 "superblock": true, 01:32:02.019 "num_base_bdevs": 4, 01:32:02.019 "num_base_bdevs_discovered": 4, 01:32:02.019 "num_base_bdevs_operational": 4, 01:32:02.019 "base_bdevs_list": [ 01:32:02.019 { 01:32:02.019 "name": "spare", 01:32:02.019 "uuid": "0945aa32-b8da-5769-9d25-3f3f9f21399b", 01:32:02.019 "is_configured": true, 01:32:02.019 "data_offset": 2048, 01:32:02.019 "data_size": 63488 01:32:02.019 }, 01:32:02.019 { 01:32:02.019 "name": "BaseBdev2", 01:32:02.019 "uuid": "6beaad86-04b8-517c-9832-da0345f859c5", 01:32:02.019 "is_configured": true, 01:32:02.019 "data_offset": 2048, 01:32:02.019 "data_size": 63488 01:32:02.019 }, 01:32:02.019 { 01:32:02.019 "name": "BaseBdev3", 01:32:02.019 "uuid": "42499631-a9bc-56fc-80b5-b91d0f9da843", 01:32:02.019 "is_configured": true, 01:32:02.019 "data_offset": 2048, 01:32:02.019 "data_size": 63488 01:32:02.019 }, 01:32:02.019 { 01:32:02.019 "name": "BaseBdev4", 01:32:02.019 "uuid": "06225e10-ba4a-5416-ad31-40a85621f269", 01:32:02.019 "is_configured": true, 01:32:02.019 "data_offset": 2048, 01:32:02.019 "data_size": 63488 01:32:02.019 } 01:32:02.019 ] 01:32:02.019 }' 01:32:02.019 05:26:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:32:02.019 05:26:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 01:32:02.019 05:26:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:32:02.019 05:26:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 01:32:02.019 05:26:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 01:32:02.019 
05:26:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 01:32:02.019 05:26:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:02.019 05:26:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:32:02.019 05:26:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:02.019 05:26:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 01:32:02.019 05:26:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 01:32:02.019 05:26:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:02.019 05:26:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:32:02.019 [2024-12-09 05:26:53.567536] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 01:32:02.019 05:26:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:02.019 05:26:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 01:32:02.019 05:26:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:32:02.019 05:26:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:32:02.019 05:26:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 01:32:02.019 05:26:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:32:02.019 05:26:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 01:32:02.019 05:26:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:32:02.019 05:26:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
01:32:02.019 05:26:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:32:02.019 05:26:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 01:32:02.019 05:26:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:32:02.019 05:26:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:02.019 05:26:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:32:02.019 05:26:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:32:02.019 05:26:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:02.019 05:26:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:32:02.019 "name": "raid_bdev1", 01:32:02.019 "uuid": "49ae7a20-4977-4898-8491-6c0dab3b5bf7", 01:32:02.019 "strip_size_kb": 64, 01:32:02.019 "state": "online", 01:32:02.019 "raid_level": "raid5f", 01:32:02.019 "superblock": true, 01:32:02.019 "num_base_bdevs": 4, 01:32:02.019 "num_base_bdevs_discovered": 3, 01:32:02.020 "num_base_bdevs_operational": 3, 01:32:02.020 "base_bdevs_list": [ 01:32:02.020 { 01:32:02.020 "name": null, 01:32:02.020 "uuid": "00000000-0000-0000-0000-000000000000", 01:32:02.020 "is_configured": false, 01:32:02.020 "data_offset": 0, 01:32:02.020 "data_size": 63488 01:32:02.020 }, 01:32:02.020 { 01:32:02.020 "name": "BaseBdev2", 01:32:02.020 "uuid": "6beaad86-04b8-517c-9832-da0345f859c5", 01:32:02.020 "is_configured": true, 01:32:02.020 "data_offset": 2048, 01:32:02.020 "data_size": 63488 01:32:02.020 }, 01:32:02.020 { 01:32:02.020 "name": "BaseBdev3", 01:32:02.020 "uuid": "42499631-a9bc-56fc-80b5-b91d0f9da843", 01:32:02.020 "is_configured": true, 01:32:02.020 "data_offset": 2048, 01:32:02.020 "data_size": 63488 01:32:02.020 }, 01:32:02.020 { 01:32:02.020 "name": "BaseBdev4", 
01:32:02.020 "uuid": "06225e10-ba4a-5416-ad31-40a85621f269", 01:32:02.020 "is_configured": true, 01:32:02.020 "data_offset": 2048, 01:32:02.020 "data_size": 63488 01:32:02.020 } 01:32:02.020 ] 01:32:02.020 }' 01:32:02.020 05:26:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:32:02.020 05:26:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:32:02.585 05:26:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 01:32:02.585 05:26:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:02.585 05:26:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:32:02.585 [2024-12-09 05:26:54.131818] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 01:32:02.585 [2024-12-09 05:26:54.132132] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 01:32:02.585 [2024-12-09 05:26:54.132178] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
01:32:02.585 [2024-12-09 05:26:54.132242] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 01:32:02.585 [2024-12-09 05:26:54.146311] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000492a0 01:32:02.585 05:26:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:02.585 05:26:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 01:32:02.585 [2024-12-09 05:26:54.155068] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 01:32:03.959 05:26:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 01:32:03.960 05:26:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:32:03.960 05:26:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 01:32:03.960 05:26:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 01:32:03.960 05:26:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:32:03.960 05:26:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:32:03.960 05:26:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:32:03.960 05:26:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:03.960 05:26:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:32:03.960 05:26:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:03.960 05:26:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:32:03.960 "name": "raid_bdev1", 01:32:03.960 "uuid": "49ae7a20-4977-4898-8491-6c0dab3b5bf7", 01:32:03.960 "strip_size_kb": 64, 01:32:03.960 "state": "online", 01:32:03.960 
"raid_level": "raid5f", 01:32:03.960 "superblock": true, 01:32:03.960 "num_base_bdevs": 4, 01:32:03.960 "num_base_bdevs_discovered": 4, 01:32:03.960 "num_base_bdevs_operational": 4, 01:32:03.960 "process": { 01:32:03.960 "type": "rebuild", 01:32:03.960 "target": "spare", 01:32:03.960 "progress": { 01:32:03.960 "blocks": 17280, 01:32:03.960 "percent": 9 01:32:03.960 } 01:32:03.960 }, 01:32:03.960 "base_bdevs_list": [ 01:32:03.960 { 01:32:03.960 "name": "spare", 01:32:03.960 "uuid": "0945aa32-b8da-5769-9d25-3f3f9f21399b", 01:32:03.960 "is_configured": true, 01:32:03.960 "data_offset": 2048, 01:32:03.960 "data_size": 63488 01:32:03.960 }, 01:32:03.960 { 01:32:03.960 "name": "BaseBdev2", 01:32:03.960 "uuid": "6beaad86-04b8-517c-9832-da0345f859c5", 01:32:03.960 "is_configured": true, 01:32:03.960 "data_offset": 2048, 01:32:03.960 "data_size": 63488 01:32:03.960 }, 01:32:03.960 { 01:32:03.960 "name": "BaseBdev3", 01:32:03.960 "uuid": "42499631-a9bc-56fc-80b5-b91d0f9da843", 01:32:03.960 "is_configured": true, 01:32:03.960 "data_offset": 2048, 01:32:03.960 "data_size": 63488 01:32:03.960 }, 01:32:03.960 { 01:32:03.960 "name": "BaseBdev4", 01:32:03.960 "uuid": "06225e10-ba4a-5416-ad31-40a85621f269", 01:32:03.960 "is_configured": true, 01:32:03.960 "data_offset": 2048, 01:32:03.960 "data_size": 63488 01:32:03.960 } 01:32:03.960 ] 01:32:03.960 }' 01:32:03.960 05:26:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:32:03.960 05:26:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 01:32:03.960 05:26:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:32:03.960 05:26:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 01:32:03.960 05:26:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 01:32:03.960 05:26:55 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 01:32:03.960 05:26:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:32:03.960 [2024-12-09 05:26:55.320606] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 01:32:03.960 [2024-12-09 05:26:55.367720] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 01:32:03.960 [2024-12-09 05:26:55.367894] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:32:03.960 [2024-12-09 05:26:55.367921] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 01:32:03.960 [2024-12-09 05:26:55.367940] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 01:32:03.960 05:26:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:03.960 05:26:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 01:32:03.960 05:26:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:32:03.960 05:26:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:32:03.960 05:26:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 01:32:03.960 05:26:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:32:03.960 05:26:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 01:32:03.960 05:26:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:32:03.960 05:26:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:32:03.960 05:26:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:32:03.960 05:26:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # 
local tmp 01:32:03.960 05:26:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:32:03.960 05:26:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:32:03.960 05:26:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:03.960 05:26:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:32:03.960 05:26:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:03.960 05:26:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:32:03.960 "name": "raid_bdev1", 01:32:03.960 "uuid": "49ae7a20-4977-4898-8491-6c0dab3b5bf7", 01:32:03.960 "strip_size_kb": 64, 01:32:03.960 "state": "online", 01:32:03.960 "raid_level": "raid5f", 01:32:03.960 "superblock": true, 01:32:03.960 "num_base_bdevs": 4, 01:32:03.960 "num_base_bdevs_discovered": 3, 01:32:03.960 "num_base_bdevs_operational": 3, 01:32:03.960 "base_bdevs_list": [ 01:32:03.960 { 01:32:03.960 "name": null, 01:32:03.960 "uuid": "00000000-0000-0000-0000-000000000000", 01:32:03.960 "is_configured": false, 01:32:03.960 "data_offset": 0, 01:32:03.960 "data_size": 63488 01:32:03.960 }, 01:32:03.960 { 01:32:03.960 "name": "BaseBdev2", 01:32:03.960 "uuid": "6beaad86-04b8-517c-9832-da0345f859c5", 01:32:03.960 "is_configured": true, 01:32:03.960 "data_offset": 2048, 01:32:03.960 "data_size": 63488 01:32:03.960 }, 01:32:03.960 { 01:32:03.960 "name": "BaseBdev3", 01:32:03.960 "uuid": "42499631-a9bc-56fc-80b5-b91d0f9da843", 01:32:03.960 "is_configured": true, 01:32:03.960 "data_offset": 2048, 01:32:03.960 "data_size": 63488 01:32:03.960 }, 01:32:03.960 { 01:32:03.960 "name": "BaseBdev4", 01:32:03.960 "uuid": "06225e10-ba4a-5416-ad31-40a85621f269", 01:32:03.960 "is_configured": true, 01:32:03.960 "data_offset": 2048, 01:32:03.960 "data_size": 63488 01:32:03.960 } 01:32:03.960 ] 01:32:03.960 }' 
01:32:03.960 05:26:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:32:03.960 05:26:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:32:04.527 05:26:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 01:32:04.527 05:26:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:04.527 05:26:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:32:04.527 [2024-12-09 05:26:55.914466] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 01:32:04.527 [2024-12-09 05:26:55.914551] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:32:04.527 [2024-12-09 05:26:55.914588] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 01:32:04.527 [2024-12-09 05:26:55.914608] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:32:04.527 [2024-12-09 05:26:55.915292] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:32:04.527 [2024-12-09 05:26:55.915398] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 01:32:04.527 [2024-12-09 05:26:55.915493] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 01:32:04.527 [2024-12-09 05:26:55.915524] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 01:32:04.527 [2024-12-09 05:26:55.915539] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
01:32:04.527 [2024-12-09 05:26:55.915588] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 01:32:04.527 [2024-12-09 05:26:55.929769] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049370 01:32:04.527 spare 01:32:04.527 05:26:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:04.527 05:26:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 01:32:04.527 [2024-12-09 05:26:55.938499] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 01:32:05.462 05:26:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 01:32:05.462 05:26:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:32:05.462 05:26:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 01:32:05.462 05:26:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 01:32:05.462 05:26:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:32:05.462 05:26:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:32:05.462 05:26:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:05.462 05:26:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:32:05.462 05:26:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:32:05.462 05:26:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:05.462 05:26:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:32:05.462 "name": "raid_bdev1", 01:32:05.462 "uuid": "49ae7a20-4977-4898-8491-6c0dab3b5bf7", 01:32:05.462 "strip_size_kb": 64, 01:32:05.462 "state": 
"online", 01:32:05.462 "raid_level": "raid5f", 01:32:05.462 "superblock": true, 01:32:05.462 "num_base_bdevs": 4, 01:32:05.462 "num_base_bdevs_discovered": 4, 01:32:05.462 "num_base_bdevs_operational": 4, 01:32:05.462 "process": { 01:32:05.462 "type": "rebuild", 01:32:05.462 "target": "spare", 01:32:05.462 "progress": { 01:32:05.462 "blocks": 17280, 01:32:05.462 "percent": 9 01:32:05.462 } 01:32:05.462 }, 01:32:05.462 "base_bdevs_list": [ 01:32:05.462 { 01:32:05.462 "name": "spare", 01:32:05.462 "uuid": "0945aa32-b8da-5769-9d25-3f3f9f21399b", 01:32:05.462 "is_configured": true, 01:32:05.462 "data_offset": 2048, 01:32:05.462 "data_size": 63488 01:32:05.462 }, 01:32:05.462 { 01:32:05.462 "name": "BaseBdev2", 01:32:05.462 "uuid": "6beaad86-04b8-517c-9832-da0345f859c5", 01:32:05.462 "is_configured": true, 01:32:05.462 "data_offset": 2048, 01:32:05.462 "data_size": 63488 01:32:05.462 }, 01:32:05.462 { 01:32:05.462 "name": "BaseBdev3", 01:32:05.462 "uuid": "42499631-a9bc-56fc-80b5-b91d0f9da843", 01:32:05.462 "is_configured": true, 01:32:05.462 "data_offset": 2048, 01:32:05.462 "data_size": 63488 01:32:05.462 }, 01:32:05.462 { 01:32:05.462 "name": "BaseBdev4", 01:32:05.462 "uuid": "06225e10-ba4a-5416-ad31-40a85621f269", 01:32:05.462 "is_configured": true, 01:32:05.462 "data_offset": 2048, 01:32:05.462 "data_size": 63488 01:32:05.462 } 01:32:05.462 ] 01:32:05.462 }' 01:32:05.462 05:26:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:32:05.462 05:26:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 01:32:05.462 05:26:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:32:05.720 05:26:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 01:32:05.720 05:26:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 01:32:05.720 05:26:57 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:05.720 05:26:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:32:05.720 [2024-12-09 05:26:57.096091] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 01:32:05.720 [2024-12-09 05:26:57.151872] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 01:32:05.720 [2024-12-09 05:26:57.151989] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:32:05.720 [2024-12-09 05:26:57.152033] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 01:32:05.720 [2024-12-09 05:26:57.152045] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 01:32:05.720 05:26:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:05.720 05:26:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 01:32:05.720 05:26:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:32:05.720 05:26:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:32:05.720 05:26:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 01:32:05.720 05:26:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:32:05.720 05:26:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 01:32:05.720 05:26:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:32:05.720 05:26:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:32:05.720 05:26:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:32:05.720 05:26:57 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 01:32:05.720 05:26:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:32:05.720 05:26:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:05.720 05:26:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:32:05.720 05:26:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:32:05.720 05:26:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:05.720 05:26:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:32:05.720 "name": "raid_bdev1", 01:32:05.720 "uuid": "49ae7a20-4977-4898-8491-6c0dab3b5bf7", 01:32:05.720 "strip_size_kb": 64, 01:32:05.720 "state": "online", 01:32:05.720 "raid_level": "raid5f", 01:32:05.720 "superblock": true, 01:32:05.720 "num_base_bdevs": 4, 01:32:05.720 "num_base_bdevs_discovered": 3, 01:32:05.720 "num_base_bdevs_operational": 3, 01:32:05.720 "base_bdevs_list": [ 01:32:05.720 { 01:32:05.720 "name": null, 01:32:05.720 "uuid": "00000000-0000-0000-0000-000000000000", 01:32:05.720 "is_configured": false, 01:32:05.720 "data_offset": 0, 01:32:05.720 "data_size": 63488 01:32:05.720 }, 01:32:05.720 { 01:32:05.720 "name": "BaseBdev2", 01:32:05.720 "uuid": "6beaad86-04b8-517c-9832-da0345f859c5", 01:32:05.721 "is_configured": true, 01:32:05.721 "data_offset": 2048, 01:32:05.721 "data_size": 63488 01:32:05.721 }, 01:32:05.721 { 01:32:05.721 "name": "BaseBdev3", 01:32:05.721 "uuid": "42499631-a9bc-56fc-80b5-b91d0f9da843", 01:32:05.721 "is_configured": true, 01:32:05.721 "data_offset": 2048, 01:32:05.721 "data_size": 63488 01:32:05.721 }, 01:32:05.721 { 01:32:05.721 "name": "BaseBdev4", 01:32:05.721 "uuid": "06225e10-ba4a-5416-ad31-40a85621f269", 01:32:05.721 "is_configured": true, 01:32:05.721 "data_offset": 2048, 01:32:05.721 
"data_size": 63488 01:32:05.721 } 01:32:05.721 ] 01:32:05.721 }' 01:32:05.721 05:26:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:32:05.721 05:26:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:32:06.284 05:26:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 01:32:06.284 05:26:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:32:06.284 05:26:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 01:32:06.284 05:26:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 01:32:06.284 05:26:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:32:06.284 05:26:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:32:06.284 05:26:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:32:06.284 05:26:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:06.284 05:26:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:32:06.284 05:26:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:06.284 05:26:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:32:06.284 "name": "raid_bdev1", 01:32:06.284 "uuid": "49ae7a20-4977-4898-8491-6c0dab3b5bf7", 01:32:06.284 "strip_size_kb": 64, 01:32:06.284 "state": "online", 01:32:06.284 "raid_level": "raid5f", 01:32:06.284 "superblock": true, 01:32:06.284 "num_base_bdevs": 4, 01:32:06.284 "num_base_bdevs_discovered": 3, 01:32:06.284 "num_base_bdevs_operational": 3, 01:32:06.284 "base_bdevs_list": [ 01:32:06.284 { 01:32:06.284 "name": null, 01:32:06.284 "uuid": "00000000-0000-0000-0000-000000000000", 01:32:06.284 
"is_configured": false, 01:32:06.284 "data_offset": 0, 01:32:06.284 "data_size": 63488 01:32:06.284 }, 01:32:06.284 { 01:32:06.284 "name": "BaseBdev2", 01:32:06.284 "uuid": "6beaad86-04b8-517c-9832-da0345f859c5", 01:32:06.284 "is_configured": true, 01:32:06.284 "data_offset": 2048, 01:32:06.284 "data_size": 63488 01:32:06.284 }, 01:32:06.284 { 01:32:06.284 "name": "BaseBdev3", 01:32:06.284 "uuid": "42499631-a9bc-56fc-80b5-b91d0f9da843", 01:32:06.284 "is_configured": true, 01:32:06.284 "data_offset": 2048, 01:32:06.284 "data_size": 63488 01:32:06.284 }, 01:32:06.284 { 01:32:06.284 "name": "BaseBdev4", 01:32:06.284 "uuid": "06225e10-ba4a-5416-ad31-40a85621f269", 01:32:06.284 "is_configured": true, 01:32:06.284 "data_offset": 2048, 01:32:06.284 "data_size": 63488 01:32:06.284 } 01:32:06.284 ] 01:32:06.284 }' 01:32:06.284 05:26:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:32:06.284 05:26:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 01:32:06.284 05:26:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:32:06.284 05:26:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 01:32:06.284 05:26:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 01:32:06.284 05:26:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:06.284 05:26:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:32:06.284 05:26:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:06.284 05:26:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 01:32:06.284 05:26:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:06.284 05:26:57 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:32:06.284 [2024-12-09 05:26:57.886913] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 01:32:06.284 [2024-12-09 05:26:57.887021] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:32:06.284 [2024-12-09 05:26:57.887052] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 01:32:06.284 [2024-12-09 05:26:57.887065] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:32:06.284 [2024-12-09 05:26:57.887752] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:32:06.284 [2024-12-09 05:26:57.887802] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 01:32:06.284 [2024-12-09 05:26:57.887905] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 01:32:06.284 [2024-12-09 05:26:57.887927] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 01:32:06.284 [2024-12-09 05:26:57.887943] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 01:32:06.284 [2024-12-09 05:26:57.887960] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 01:32:06.284 BaseBdev1 01:32:06.284 05:26:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:06.284 05:26:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 01:32:07.683 05:26:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 01:32:07.683 05:26:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:32:07.683 05:26:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 01:32:07.683 05:26:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 01:32:07.683 05:26:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:32:07.683 05:26:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 01:32:07.683 05:26:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:32:07.683 05:26:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:32:07.683 05:26:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:32:07.683 05:26:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 01:32:07.683 05:26:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:32:07.683 05:26:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:32:07.683 05:26:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:07.683 05:26:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:32:07.683 05:26:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:07.683 05:26:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:32:07.683 "name": "raid_bdev1", 01:32:07.683 "uuid": "49ae7a20-4977-4898-8491-6c0dab3b5bf7", 01:32:07.683 "strip_size_kb": 64, 01:32:07.683 "state": "online", 01:32:07.683 "raid_level": "raid5f", 01:32:07.683 "superblock": true, 01:32:07.683 "num_base_bdevs": 4, 01:32:07.683 "num_base_bdevs_discovered": 3, 01:32:07.683 "num_base_bdevs_operational": 3, 01:32:07.683 "base_bdevs_list": [ 01:32:07.683 { 01:32:07.683 "name": null, 01:32:07.683 "uuid": "00000000-0000-0000-0000-000000000000", 01:32:07.683 "is_configured": false, 01:32:07.683 
"data_offset": 0, 01:32:07.683 "data_size": 63488 01:32:07.683 }, 01:32:07.683 { 01:32:07.683 "name": "BaseBdev2", 01:32:07.683 "uuid": "6beaad86-04b8-517c-9832-da0345f859c5", 01:32:07.683 "is_configured": true, 01:32:07.683 "data_offset": 2048, 01:32:07.683 "data_size": 63488 01:32:07.683 }, 01:32:07.683 { 01:32:07.683 "name": "BaseBdev3", 01:32:07.683 "uuid": "42499631-a9bc-56fc-80b5-b91d0f9da843", 01:32:07.683 "is_configured": true, 01:32:07.683 "data_offset": 2048, 01:32:07.683 "data_size": 63488 01:32:07.683 }, 01:32:07.683 { 01:32:07.683 "name": "BaseBdev4", 01:32:07.683 "uuid": "06225e10-ba4a-5416-ad31-40a85621f269", 01:32:07.683 "is_configured": true, 01:32:07.683 "data_offset": 2048, 01:32:07.683 "data_size": 63488 01:32:07.683 } 01:32:07.683 ] 01:32:07.683 }' 01:32:07.683 05:26:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:32:07.683 05:26:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:32:07.940 05:26:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 01:32:07.940 05:26:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:32:07.940 05:26:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 01:32:07.940 05:26:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 01:32:07.940 05:26:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:32:07.940 05:26:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:32:07.940 05:26:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:07.940 05:26:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:32:07.940 05:26:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
01:32:07.940 05:26:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:07.940 05:26:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:32:07.940 "name": "raid_bdev1", 01:32:07.940 "uuid": "49ae7a20-4977-4898-8491-6c0dab3b5bf7", 01:32:07.940 "strip_size_kb": 64, 01:32:07.940 "state": "online", 01:32:07.940 "raid_level": "raid5f", 01:32:07.940 "superblock": true, 01:32:07.940 "num_base_bdevs": 4, 01:32:07.940 "num_base_bdevs_discovered": 3, 01:32:07.940 "num_base_bdevs_operational": 3, 01:32:07.940 "base_bdevs_list": [ 01:32:07.940 { 01:32:07.940 "name": null, 01:32:07.940 "uuid": "00000000-0000-0000-0000-000000000000", 01:32:07.940 "is_configured": false, 01:32:07.940 "data_offset": 0, 01:32:07.940 "data_size": 63488 01:32:07.940 }, 01:32:07.940 { 01:32:07.940 "name": "BaseBdev2", 01:32:07.940 "uuid": "6beaad86-04b8-517c-9832-da0345f859c5", 01:32:07.940 "is_configured": true, 01:32:07.940 "data_offset": 2048, 01:32:07.940 "data_size": 63488 01:32:07.940 }, 01:32:07.940 { 01:32:07.940 "name": "BaseBdev3", 01:32:07.940 "uuid": "42499631-a9bc-56fc-80b5-b91d0f9da843", 01:32:07.940 "is_configured": true, 01:32:07.940 "data_offset": 2048, 01:32:07.940 "data_size": 63488 01:32:07.940 }, 01:32:07.940 { 01:32:07.940 "name": "BaseBdev4", 01:32:07.940 "uuid": "06225e10-ba4a-5416-ad31-40a85621f269", 01:32:07.940 "is_configured": true, 01:32:07.940 "data_offset": 2048, 01:32:07.940 "data_size": 63488 01:32:07.940 } 01:32:07.940 ] 01:32:07.940 }' 01:32:07.940 05:26:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:32:07.940 05:26:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 01:32:07.940 05:26:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:32:08.197 05:26:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 01:32:08.197 
05:26:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 01:32:08.197 05:26:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 01:32:08.197 05:26:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 01:32:08.197 05:26:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 01:32:08.197 05:26:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:32:08.197 05:26:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 01:32:08.197 05:26:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:32:08.197 05:26:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 01:32:08.197 05:26:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:08.197 05:26:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:32:08.197 [2024-12-09 05:26:59.591390] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 01:32:08.197 [2024-12-09 05:26:59.591666] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 01:32:08.197 [2024-12-09 05:26:59.591692] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 01:32:08.197 request: 01:32:08.197 { 01:32:08.197 "base_bdev": "BaseBdev1", 01:32:08.197 "raid_bdev": "raid_bdev1", 01:32:08.197 "method": "bdev_raid_add_base_bdev", 01:32:08.197 "req_id": 1 01:32:08.197 } 01:32:08.197 Got JSON-RPC error response 01:32:08.197 response: 01:32:08.197 { 01:32:08.197 "code": -22, 01:32:08.197 "message": 
"Failed to add base bdev to RAID bdev: Invalid argument" 01:32:08.197 } 01:32:08.197 05:26:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 01:32:08.197 05:26:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 01:32:08.197 05:26:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:32:08.197 05:26:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:32:08.197 05:26:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:32:08.197 05:26:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 01:32:09.127 05:27:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 01:32:09.127 05:27:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:32:09.127 05:27:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:32:09.127 05:27:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 01:32:09.127 05:27:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 01:32:09.127 05:27:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 01:32:09.127 05:27:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:32:09.127 05:27:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:32:09.127 05:27:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:32:09.127 05:27:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 01:32:09.127 05:27:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:32:09.127 05:27:00 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 01:32:09.127 05:27:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:32:09.127 05:27:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:32:09.127 05:27:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:09.127 05:27:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:32:09.127 "name": "raid_bdev1", 01:32:09.127 "uuid": "49ae7a20-4977-4898-8491-6c0dab3b5bf7", 01:32:09.127 "strip_size_kb": 64, 01:32:09.127 "state": "online", 01:32:09.127 "raid_level": "raid5f", 01:32:09.127 "superblock": true, 01:32:09.127 "num_base_bdevs": 4, 01:32:09.127 "num_base_bdevs_discovered": 3, 01:32:09.127 "num_base_bdevs_operational": 3, 01:32:09.127 "base_bdevs_list": [ 01:32:09.127 { 01:32:09.127 "name": null, 01:32:09.127 "uuid": "00000000-0000-0000-0000-000000000000", 01:32:09.127 "is_configured": false, 01:32:09.127 "data_offset": 0, 01:32:09.127 "data_size": 63488 01:32:09.127 }, 01:32:09.127 { 01:32:09.127 "name": "BaseBdev2", 01:32:09.127 "uuid": "6beaad86-04b8-517c-9832-da0345f859c5", 01:32:09.127 "is_configured": true, 01:32:09.127 "data_offset": 2048, 01:32:09.127 "data_size": 63488 01:32:09.127 }, 01:32:09.127 { 01:32:09.127 "name": "BaseBdev3", 01:32:09.127 "uuid": "42499631-a9bc-56fc-80b5-b91d0f9da843", 01:32:09.127 "is_configured": true, 01:32:09.127 "data_offset": 2048, 01:32:09.127 "data_size": 63488 01:32:09.127 }, 01:32:09.127 { 01:32:09.127 "name": "BaseBdev4", 01:32:09.127 "uuid": "06225e10-ba4a-5416-ad31-40a85621f269", 01:32:09.127 "is_configured": true, 01:32:09.127 "data_offset": 2048, 01:32:09.127 "data_size": 63488 01:32:09.127 } 01:32:09.127 ] 01:32:09.127 }' 01:32:09.127 05:27:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:32:09.127 05:27:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 01:32:09.691 05:27:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 01:32:09.691 05:27:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:32:09.691 05:27:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 01:32:09.691 05:27:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 01:32:09.691 05:27:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:32:09.691 05:27:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:32:09.691 05:27:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:32:09.691 05:27:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:09.691 05:27:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:32:09.691 05:27:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:09.691 05:27:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:32:09.691 "name": "raid_bdev1", 01:32:09.691 "uuid": "49ae7a20-4977-4898-8491-6c0dab3b5bf7", 01:32:09.691 "strip_size_kb": 64, 01:32:09.691 "state": "online", 01:32:09.691 "raid_level": "raid5f", 01:32:09.691 "superblock": true, 01:32:09.691 "num_base_bdevs": 4, 01:32:09.691 "num_base_bdevs_discovered": 3, 01:32:09.691 "num_base_bdevs_operational": 3, 01:32:09.691 "base_bdevs_list": [ 01:32:09.691 { 01:32:09.691 "name": null, 01:32:09.691 "uuid": "00000000-0000-0000-0000-000000000000", 01:32:09.691 "is_configured": false, 01:32:09.691 "data_offset": 0, 01:32:09.691 "data_size": 63488 01:32:09.691 }, 01:32:09.691 { 01:32:09.691 "name": "BaseBdev2", 01:32:09.691 "uuid": "6beaad86-04b8-517c-9832-da0345f859c5", 01:32:09.691 "is_configured": true, 
01:32:09.691 "data_offset": 2048, 01:32:09.691 "data_size": 63488 01:32:09.691 }, 01:32:09.691 { 01:32:09.691 "name": "BaseBdev3", 01:32:09.691 "uuid": "42499631-a9bc-56fc-80b5-b91d0f9da843", 01:32:09.691 "is_configured": true, 01:32:09.691 "data_offset": 2048, 01:32:09.691 "data_size": 63488 01:32:09.691 }, 01:32:09.691 { 01:32:09.691 "name": "BaseBdev4", 01:32:09.691 "uuid": "06225e10-ba4a-5416-ad31-40a85621f269", 01:32:09.691 "is_configured": true, 01:32:09.691 "data_offset": 2048, 01:32:09.691 "data_size": 63488 01:32:09.691 } 01:32:09.691 ] 01:32:09.691 }' 01:32:09.691 05:27:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:32:09.691 05:27:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 01:32:09.691 05:27:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:32:09.691 05:27:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 01:32:09.691 05:27:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 85464 01:32:09.691 05:27:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 85464 ']' 01:32:09.691 05:27:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 85464 01:32:09.691 05:27:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 01:32:09.691 05:27:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:32:09.692 05:27:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85464 01:32:09.949 05:27:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:32:09.949 05:27:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:32:09.949 killing process with pid 85464 01:32:09.949 05:27:01 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85464' 01:32:09.949 05:27:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 85464 01:32:09.949 Received shutdown signal, test time was about 60.000000 seconds 01:32:09.949 01:32:09.949 Latency(us) 01:32:09.949 [2024-12-09T05:27:01.566Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:32:09.949 [2024-12-09T05:27:01.566Z] =================================================================================================================== 01:32:09.949 [2024-12-09T05:27:01.566Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 01:32:09.949 [2024-12-09 05:27:01.318815] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 01:32:09.949 05:27:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 85464 01:32:09.949 [2024-12-09 05:27:01.318985] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 01:32:09.949 [2024-12-09 05:27:01.319087] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 01:32:09.949 [2024-12-09 05:27:01.319108] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 01:32:10.207 [2024-12-09 05:27:01.705131] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 01:32:11.582 05:27:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 01:32:11.582 01:32:11.582 real 0m28.818s 01:32:11.582 user 0m37.575s 01:32:11.582 sys 0m3.010s 01:32:11.582 05:27:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 01:32:11.582 05:27:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 01:32:11.582 ************************************ 01:32:11.582 END TEST raid5f_rebuild_test_sb 01:32:11.582 ************************************ 01:32:11.582 05:27:02 
bdev_raid -- bdev/bdev_raid.sh@995 -- # base_blocklen=4096 01:32:11.582 05:27:02 bdev_raid -- bdev/bdev_raid.sh@997 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 01:32:11.582 05:27:02 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 01:32:11.582 05:27:02 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 01:32:11.582 05:27:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 01:32:11.582 ************************************ 01:32:11.582 START TEST raid_state_function_test_sb_4k 01:32:11.582 ************************************ 01:32:11.582 05:27:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 01:32:11.582 05:27:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 01:32:11.582 05:27:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 01:32:11.582 05:27:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # local superblock=true 01:32:11.582 05:27:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # local raid_bdev 01:32:11.582 05:27:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 01:32:11.582 05:27:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 01:32:11.582 05:27:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 01:32:11.582 05:27:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 01:32:11.582 05:27:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 01:32:11.582 05:27:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 01:32:11.582 05:27:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 01:32:11.582 05:27:02 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 01:32:11.582 05:27:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 01:32:11.582 05:27:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # local base_bdevs 01:32:11.582 05:27:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 01:32:11.582 05:27:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # local strip_size 01:32:11.582 05:27:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 01:32:11.582 05:27:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 01:32:11.582 05:27:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 01:32:11.582 05:27:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@219 -- # strip_size=0 01:32:11.582 05:27:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 01:32:11.582 05:27:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 01:32:11.582 05:27:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@229 -- # raid_pid=86288 01:32:11.582 Process raid pid: 86288 01:32:11.582 05:27:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 86288' 01:32:11.582 05:27:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@231 -- # waitforlisten 86288 01:32:11.582 05:27:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 01:32:11.582 05:27:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 86288 ']' 01:32:11.582 05:27:02 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:32:11.582 05:27:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@840 -- # local max_retries=100 01:32:11.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:32:11.582 05:27:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:32:11.582 05:27:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 01:32:11.582 05:27:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 01:32:11.582 [2024-12-09 05:27:02.945462] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:32:11.582 [2024-12-09 05:27:02.946655] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:32:11.582 [2024-12-09 05:27:03.163881] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:32:11.841 [2024-12-09 05:27:03.285550] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:32:12.099 [2024-12-09 05:27:03.485733] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 01:32:12.099 [2024-12-09 05:27:03.485786] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 01:32:12.670 05:27:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:32:12.670 05:27:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 01:32:12.670 05:27:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n 
Existed_Raid 01:32:12.670 05:27:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:12.670 05:27:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 01:32:12.670 [2024-12-09 05:27:03.983658] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 01:32:12.670 [2024-12-09 05:27:03.983744] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 01:32:12.670 [2024-12-09 05:27:03.983760] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 01:32:12.670 [2024-12-09 05:27:03.983775] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 01:32:12.670 05:27:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:12.670 05:27:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 01:32:12.670 05:27:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:32:12.670 05:27:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:32:12.670 05:27:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:32:12.670 05:27:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:32:12.670 05:27:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 01:32:12.670 05:27:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:32:12.670 05:27:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:32:12.670 05:27:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:32:12.670 
05:27:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 01:32:12.670 05:27:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:32:12.670 05:27:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:32:12.670 05:27:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:12.670 05:27:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 01:32:12.670 05:27:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:12.670 05:27:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:32:12.670 "name": "Existed_Raid", 01:32:12.670 "uuid": "dc4d59cc-308b-4a2a-921b-6d9aeac574f1", 01:32:12.670 "strip_size_kb": 0, 01:32:12.670 "state": "configuring", 01:32:12.670 "raid_level": "raid1", 01:32:12.670 "superblock": true, 01:32:12.670 "num_base_bdevs": 2, 01:32:12.670 "num_base_bdevs_discovered": 0, 01:32:12.670 "num_base_bdevs_operational": 2, 01:32:12.670 "base_bdevs_list": [ 01:32:12.670 { 01:32:12.670 "name": "BaseBdev1", 01:32:12.670 "uuid": "00000000-0000-0000-0000-000000000000", 01:32:12.670 "is_configured": false, 01:32:12.670 "data_offset": 0, 01:32:12.670 "data_size": 0 01:32:12.670 }, 01:32:12.670 { 01:32:12.670 "name": "BaseBdev2", 01:32:12.670 "uuid": "00000000-0000-0000-0000-000000000000", 01:32:12.670 "is_configured": false, 01:32:12.670 "data_offset": 0, 01:32:12.670 "data_size": 0 01:32:12.670 } 01:32:12.670 ] 01:32:12.670 }' 01:32:12.670 05:27:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:32:12.670 05:27:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 01:32:12.929 05:27:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # rpc_cmd 
bdev_raid_delete Existed_Raid 01:32:12.929 05:27:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:12.929 05:27:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 01:32:12.929 [2024-12-09 05:27:04.484181] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 01:32:12.929 [2024-12-09 05:27:04.484386] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 01:32:12.929 05:27:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:12.929 05:27:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 01:32:12.929 05:27:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:12.929 05:27:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 01:32:12.929 [2024-12-09 05:27:04.492178] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 01:32:12.929 [2024-12-09 05:27:04.492430] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 01:32:12.929 [2024-12-09 05:27:04.492553] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 01:32:12.929 [2024-12-09 05:27:04.492711] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 01:32:12.929 05:27:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:12.929 05:27:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1 01:32:12.929 05:27:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:12.929 05:27:04 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 01:32:12.929 [2024-12-09 05:27:04.538522] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 01:32:12.929 BaseBdev1 01:32:12.929 05:27:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:12.929 05:27:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 01:32:12.929 05:27:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 01:32:12.929 05:27:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 01:32:12.929 05:27:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 01:32:12.929 05:27:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # [[ -z '' ]] 01:32:12.929 05:27:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 01:32:12.929 05:27:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 01:32:12.929 05:27:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:12.929 05:27:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 01:32:13.187 05:27:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:13.187 05:27:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 01:32:13.187 05:27:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:13.187 05:27:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 01:32:13.187 [ 01:32:13.187 { 01:32:13.187 "name": "BaseBdev1", 01:32:13.187 "aliases": [ 01:32:13.187 
"ff52d09d-f352-46be-82a7-316b1f6d5ba1" 01:32:13.187 ], 01:32:13.187 "product_name": "Malloc disk", 01:32:13.187 "block_size": 4096, 01:32:13.187 "num_blocks": 8192, 01:32:13.187 "uuid": "ff52d09d-f352-46be-82a7-316b1f6d5ba1", 01:32:13.187 "assigned_rate_limits": { 01:32:13.187 "rw_ios_per_sec": 0, 01:32:13.187 "rw_mbytes_per_sec": 0, 01:32:13.187 "r_mbytes_per_sec": 0, 01:32:13.187 "w_mbytes_per_sec": 0 01:32:13.187 }, 01:32:13.187 "claimed": true, 01:32:13.187 "claim_type": "exclusive_write", 01:32:13.187 "zoned": false, 01:32:13.187 "supported_io_types": { 01:32:13.187 "read": true, 01:32:13.187 "write": true, 01:32:13.187 "unmap": true, 01:32:13.187 "flush": true, 01:32:13.187 "reset": true, 01:32:13.187 "nvme_admin": false, 01:32:13.187 "nvme_io": false, 01:32:13.187 "nvme_io_md": false, 01:32:13.187 "write_zeroes": true, 01:32:13.187 "zcopy": true, 01:32:13.187 "get_zone_info": false, 01:32:13.187 "zone_management": false, 01:32:13.187 "zone_append": false, 01:32:13.187 "compare": false, 01:32:13.187 "compare_and_write": false, 01:32:13.187 "abort": true, 01:32:13.187 "seek_hole": false, 01:32:13.187 "seek_data": false, 01:32:13.187 "copy": true, 01:32:13.187 "nvme_iov_md": false 01:32:13.187 }, 01:32:13.187 "memory_domains": [ 01:32:13.187 { 01:32:13.187 "dma_device_id": "system", 01:32:13.187 "dma_device_type": 1 01:32:13.187 }, 01:32:13.187 { 01:32:13.187 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:32:13.187 "dma_device_type": 2 01:32:13.187 } 01:32:13.187 ], 01:32:13.187 "driver_specific": {} 01:32:13.187 } 01:32:13.187 ] 01:32:13.187 05:27:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:13.187 05:27:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 01:32:13.187 05:27:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 01:32:13.187 05:27:04 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:32:13.187 05:27:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:32:13.187 05:27:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:32:13.187 05:27:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:32:13.187 05:27:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 01:32:13.187 05:27:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:32:13.187 05:27:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:32:13.187 05:27:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:32:13.187 05:27:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 01:32:13.187 05:27:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:32:13.187 05:27:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:32:13.187 05:27:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:13.187 05:27:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 01:32:13.187 05:27:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:13.187 05:27:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:32:13.187 "name": "Existed_Raid", 01:32:13.187 "uuid": "4f0bfb0b-d0ac-45f8-9044-a34c7d2ecdef", 01:32:13.187 "strip_size_kb": 0, 01:32:13.187 "state": "configuring", 01:32:13.187 "raid_level": "raid1", 01:32:13.187 "superblock": true, 01:32:13.187 "num_base_bdevs": 2, 01:32:13.187 
"num_base_bdevs_discovered": 1, 01:32:13.187 "num_base_bdevs_operational": 2, 01:32:13.187 "base_bdevs_list": [ 01:32:13.187 { 01:32:13.187 "name": "BaseBdev1", 01:32:13.187 "uuid": "ff52d09d-f352-46be-82a7-316b1f6d5ba1", 01:32:13.187 "is_configured": true, 01:32:13.187 "data_offset": 256, 01:32:13.187 "data_size": 7936 01:32:13.187 }, 01:32:13.187 { 01:32:13.187 "name": "BaseBdev2", 01:32:13.187 "uuid": "00000000-0000-0000-0000-000000000000", 01:32:13.187 "is_configured": false, 01:32:13.187 "data_offset": 0, 01:32:13.187 "data_size": 0 01:32:13.187 } 01:32:13.187 ] 01:32:13.187 }' 01:32:13.187 05:27:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:32:13.187 05:27:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 01:32:13.755 05:27:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 01:32:13.755 05:27:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:13.755 05:27:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 01:32:13.755 [2024-12-09 05:27:05.084578] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 01:32:13.755 [2024-12-09 05:27:05.084680] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 01:32:13.755 05:27:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:13.755 05:27:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 01:32:13.755 05:27:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:13.755 05:27:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 01:32:13.755 [2024-12-09 05:27:05.092597] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 01:32:13.755 [2024-12-09 05:27:05.095330] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 01:32:13.755 [2024-12-09 05:27:05.095556] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 01:32:13.755 05:27:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:13.755 05:27:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 01:32:13.755 05:27:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 01:32:13.755 05:27:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 01:32:13.755 05:27:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:32:13.755 05:27:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:32:13.755 05:27:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:32:13.755 05:27:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:32:13.755 05:27:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 01:32:13.755 05:27:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:32:13.755 05:27:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:32:13.755 05:27:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:32:13.755 05:27:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 01:32:13.755 05:27:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 01:32:13.755 05:27:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:32:13.755 05:27:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:13.755 05:27:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 01:32:13.755 05:27:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:13.755 05:27:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:32:13.755 "name": "Existed_Raid", 01:32:13.755 "uuid": "786b577b-d57b-4fc8-a955-d760a9b57f24", 01:32:13.755 "strip_size_kb": 0, 01:32:13.755 "state": "configuring", 01:32:13.755 "raid_level": "raid1", 01:32:13.755 "superblock": true, 01:32:13.755 "num_base_bdevs": 2, 01:32:13.755 "num_base_bdevs_discovered": 1, 01:32:13.755 "num_base_bdevs_operational": 2, 01:32:13.755 "base_bdevs_list": [ 01:32:13.755 { 01:32:13.755 "name": "BaseBdev1", 01:32:13.755 "uuid": "ff52d09d-f352-46be-82a7-316b1f6d5ba1", 01:32:13.755 "is_configured": true, 01:32:13.755 "data_offset": 256, 01:32:13.755 "data_size": 7936 01:32:13.755 }, 01:32:13.755 { 01:32:13.755 "name": "BaseBdev2", 01:32:13.755 "uuid": "00000000-0000-0000-0000-000000000000", 01:32:13.755 "is_configured": false, 01:32:13.755 "data_offset": 0, 01:32:13.755 "data_size": 0 01:32:13.755 } 01:32:13.755 ] 01:32:13.755 }' 01:32:13.755 05:27:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:32:13.755 05:27:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 01:32:14.013 05:27:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2 01:32:14.013 05:27:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:14.013 05:27:05 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 01:32:14.272 [2024-12-09 05:27:05.646653] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 01:32:14.272 [2024-12-09 05:27:05.647070] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 01:32:14.272 [2024-12-09 05:27:05.647090] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 01:32:14.272 BaseBdev2 01:32:14.272 [2024-12-09 05:27:05.647474] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 01:32:14.272 [2024-12-09 05:27:05.647861] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 01:32:14.272 [2024-12-09 05:27:05.647895] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 01:32:14.272 05:27:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:14.272 [2024-12-09 05:27:05.648216] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:32:14.272 05:27:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 01:32:14.272 05:27:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 01:32:14.272 05:27:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 01:32:14.272 05:27:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 01:32:14.272 05:27:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # [[ -z '' ]] 01:32:14.272 05:27:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 01:32:14.272 05:27:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 01:32:14.272 05:27:05 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:14.272 05:27:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 01:32:14.272 05:27:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:14.272 05:27:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 01:32:14.272 05:27:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:14.272 05:27:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 01:32:14.272 [ 01:32:14.272 { 01:32:14.272 "name": "BaseBdev2", 01:32:14.272 "aliases": [ 01:32:14.272 "a53bdef7-3932-4312-8a3e-d584edd2aa09" 01:32:14.272 ], 01:32:14.272 "product_name": "Malloc disk", 01:32:14.272 "block_size": 4096, 01:32:14.272 "num_blocks": 8192, 01:32:14.272 "uuid": "a53bdef7-3932-4312-8a3e-d584edd2aa09", 01:32:14.272 "assigned_rate_limits": { 01:32:14.272 "rw_ios_per_sec": 0, 01:32:14.272 "rw_mbytes_per_sec": 0, 01:32:14.272 "r_mbytes_per_sec": 0, 01:32:14.272 "w_mbytes_per_sec": 0 01:32:14.272 }, 01:32:14.272 "claimed": true, 01:32:14.272 "claim_type": "exclusive_write", 01:32:14.272 "zoned": false, 01:32:14.272 "supported_io_types": { 01:32:14.272 "read": true, 01:32:14.272 "write": true, 01:32:14.272 "unmap": true, 01:32:14.272 "flush": true, 01:32:14.272 "reset": true, 01:32:14.272 "nvme_admin": false, 01:32:14.272 "nvme_io": false, 01:32:14.272 "nvme_io_md": false, 01:32:14.272 "write_zeroes": true, 01:32:14.272 "zcopy": true, 01:32:14.272 "get_zone_info": false, 01:32:14.272 "zone_management": false, 01:32:14.272 "zone_append": false, 01:32:14.272 "compare": false, 01:32:14.272 "compare_and_write": false, 01:32:14.272 "abort": true, 01:32:14.272 "seek_hole": false, 01:32:14.272 "seek_data": false, 01:32:14.272 "copy": true, 01:32:14.272 "nvme_iov_md": false 
01:32:14.272 }, 01:32:14.272 "memory_domains": [ 01:32:14.272 { 01:32:14.272 "dma_device_id": "system", 01:32:14.272 "dma_device_type": 1 01:32:14.272 }, 01:32:14.272 { 01:32:14.272 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:32:14.272 "dma_device_type": 2 01:32:14.272 } 01:32:14.272 ], 01:32:14.272 "driver_specific": {} 01:32:14.272 } 01:32:14.272 ] 01:32:14.272 05:27:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:14.272 05:27:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 01:32:14.272 05:27:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i++ )) 01:32:14.272 05:27:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 01:32:14.272 05:27:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 01:32:14.272 05:27:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:32:14.272 05:27:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:32:14.272 05:27:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:32:14.272 05:27:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:32:14.272 05:27:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 01:32:14.272 05:27:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:32:14.272 05:27:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:32:14.272 05:27:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:32:14.272 05:27:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # 
local tmp 01:32:14.272 05:27:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:32:14.272 05:27:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:32:14.272 05:27:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:14.272 05:27:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 01:32:14.272 05:27:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:14.272 05:27:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:32:14.272 "name": "Existed_Raid", 01:32:14.272 "uuid": "786b577b-d57b-4fc8-a955-d760a9b57f24", 01:32:14.272 "strip_size_kb": 0, 01:32:14.272 "state": "online", 01:32:14.272 "raid_level": "raid1", 01:32:14.272 "superblock": true, 01:32:14.273 "num_base_bdevs": 2, 01:32:14.273 "num_base_bdevs_discovered": 2, 01:32:14.273 "num_base_bdevs_operational": 2, 01:32:14.273 "base_bdevs_list": [ 01:32:14.273 { 01:32:14.273 "name": "BaseBdev1", 01:32:14.273 "uuid": "ff52d09d-f352-46be-82a7-316b1f6d5ba1", 01:32:14.273 "is_configured": true, 01:32:14.273 "data_offset": 256, 01:32:14.273 "data_size": 7936 01:32:14.273 }, 01:32:14.273 { 01:32:14.273 "name": "BaseBdev2", 01:32:14.273 "uuid": "a53bdef7-3932-4312-8a3e-d584edd2aa09", 01:32:14.273 "is_configured": true, 01:32:14.273 "data_offset": 256, 01:32:14.273 "data_size": 7936 01:32:14.273 } 01:32:14.273 ] 01:32:14.273 }' 01:32:14.273 05:27:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:32:14.273 05:27:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 01:32:14.841 05:27:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 01:32:14.841 05:27:06 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 01:32:14.841 05:27:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 01:32:14.841 05:27:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 01:32:14.841 05:27:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local name 01:32:14.841 05:27:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 01:32:14.841 05:27:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 01:32:14.841 05:27:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:14.841 05:27:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 01:32:14.841 05:27:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 01:32:14.841 [2024-12-09 05:27:06.203121] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 01:32:14.841 05:27:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:14.841 05:27:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 01:32:14.841 "name": "Existed_Raid", 01:32:14.841 "aliases": [ 01:32:14.841 "786b577b-d57b-4fc8-a955-d760a9b57f24" 01:32:14.841 ], 01:32:14.841 "product_name": "Raid Volume", 01:32:14.841 "block_size": 4096, 01:32:14.841 "num_blocks": 7936, 01:32:14.841 "uuid": "786b577b-d57b-4fc8-a955-d760a9b57f24", 01:32:14.841 "assigned_rate_limits": { 01:32:14.841 "rw_ios_per_sec": 0, 01:32:14.841 "rw_mbytes_per_sec": 0, 01:32:14.841 "r_mbytes_per_sec": 0, 01:32:14.841 "w_mbytes_per_sec": 0 01:32:14.841 }, 01:32:14.841 "claimed": false, 01:32:14.841 "zoned": false, 01:32:14.841 "supported_io_types": { 01:32:14.841 "read": true, 
01:32:14.841 "write": true, 01:32:14.841 "unmap": false, 01:32:14.841 "flush": false, 01:32:14.841 "reset": true, 01:32:14.841 "nvme_admin": false, 01:32:14.841 "nvme_io": false, 01:32:14.841 "nvme_io_md": false, 01:32:14.841 "write_zeroes": true, 01:32:14.841 "zcopy": false, 01:32:14.841 "get_zone_info": false, 01:32:14.841 "zone_management": false, 01:32:14.841 "zone_append": false, 01:32:14.841 "compare": false, 01:32:14.841 "compare_and_write": false, 01:32:14.841 "abort": false, 01:32:14.841 "seek_hole": false, 01:32:14.841 "seek_data": false, 01:32:14.841 "copy": false, 01:32:14.841 "nvme_iov_md": false 01:32:14.841 }, 01:32:14.841 "memory_domains": [ 01:32:14.841 { 01:32:14.841 "dma_device_id": "system", 01:32:14.841 "dma_device_type": 1 01:32:14.841 }, 01:32:14.841 { 01:32:14.841 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:32:14.841 "dma_device_type": 2 01:32:14.841 }, 01:32:14.841 { 01:32:14.841 "dma_device_id": "system", 01:32:14.841 "dma_device_type": 1 01:32:14.841 }, 01:32:14.841 { 01:32:14.841 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:32:14.841 "dma_device_type": 2 01:32:14.841 } 01:32:14.841 ], 01:32:14.841 "driver_specific": { 01:32:14.841 "raid": { 01:32:14.842 "uuid": "786b577b-d57b-4fc8-a955-d760a9b57f24", 01:32:14.842 "strip_size_kb": 0, 01:32:14.842 "state": "online", 01:32:14.842 "raid_level": "raid1", 01:32:14.842 "superblock": true, 01:32:14.842 "num_base_bdevs": 2, 01:32:14.842 "num_base_bdevs_discovered": 2, 01:32:14.842 "num_base_bdevs_operational": 2, 01:32:14.842 "base_bdevs_list": [ 01:32:14.842 { 01:32:14.842 "name": "BaseBdev1", 01:32:14.842 "uuid": "ff52d09d-f352-46be-82a7-316b1f6d5ba1", 01:32:14.842 "is_configured": true, 01:32:14.842 "data_offset": 256, 01:32:14.842 "data_size": 7936 01:32:14.842 }, 01:32:14.842 { 01:32:14.842 "name": "BaseBdev2", 01:32:14.842 "uuid": "a53bdef7-3932-4312-8a3e-d584edd2aa09", 01:32:14.842 "is_configured": true, 01:32:14.842 "data_offset": 256, 01:32:14.842 "data_size": 7936 01:32:14.842 } 
01:32:14.842 ] 01:32:14.842 } 01:32:14.842 } 01:32:14.842 }' 01:32:14.842 05:27:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 01:32:14.842 05:27:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 01:32:14.842 BaseBdev2' 01:32:14.842 05:27:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:32:14.842 05:27:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 01:32:14.842 05:27:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:32:14.842 05:27:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:32:14.842 05:27:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 01:32:14.842 05:27:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:14.842 05:27:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 01:32:14.842 05:27:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:14.842 05:27:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 01:32:14.842 05:27:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 01:32:14.842 05:27:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:32:14.842 05:27:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 01:32:14.842 05:27:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- 
# jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:32:14.842 05:27:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:14.842 05:27:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 01:32:14.842 05:27:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:15.101 05:27:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 01:32:15.101 05:27:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 01:32:15.101 05:27:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 01:32:15.101 05:27:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:15.101 05:27:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 01:32:15.101 [2024-12-09 05:27:06.474989] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 01:32:15.101 05:27:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:15.101 05:27:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # local expected_state 01:32:15.101 05:27:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 01:32:15.101 05:27:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 01:32:15.101 05:27:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@199 -- # return 0 01:32:15.101 05:27:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # expected_state=online 01:32:15.101 05:27:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 01:32:15.101 05:27:06 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:32:15.101 05:27:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:32:15.101 05:27:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:32:15.101 05:27:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:32:15.101 05:27:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 01:32:15.101 05:27:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:32:15.101 05:27:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:32:15.101 05:27:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:32:15.101 05:27:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 01:32:15.101 05:27:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:32:15.101 05:27:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:32:15.101 05:27:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:15.101 05:27:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 01:32:15.101 05:27:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:15.101 05:27:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:32:15.101 "name": "Existed_Raid", 01:32:15.101 "uuid": "786b577b-d57b-4fc8-a955-d760a9b57f24", 01:32:15.101 "strip_size_kb": 0, 01:32:15.101 "state": "online", 01:32:15.101 "raid_level": "raid1", 01:32:15.101 "superblock": true, 01:32:15.101 "num_base_bdevs": 2, 01:32:15.101 
"num_base_bdevs_discovered": 1, 01:32:15.101 "num_base_bdevs_operational": 1, 01:32:15.101 "base_bdevs_list": [ 01:32:15.101 { 01:32:15.101 "name": null, 01:32:15.101 "uuid": "00000000-0000-0000-0000-000000000000", 01:32:15.101 "is_configured": false, 01:32:15.101 "data_offset": 0, 01:32:15.101 "data_size": 7936 01:32:15.101 }, 01:32:15.101 { 01:32:15.101 "name": "BaseBdev2", 01:32:15.101 "uuid": "a53bdef7-3932-4312-8a3e-d584edd2aa09", 01:32:15.101 "is_configured": true, 01:32:15.101 "data_offset": 256, 01:32:15.101 "data_size": 7936 01:32:15.101 } 01:32:15.101 ] 01:32:15.101 }' 01:32:15.101 05:27:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:32:15.101 05:27:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 01:32:15.669 05:27:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 01:32:15.669 05:27:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 01:32:15.670 05:27:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 01:32:15.670 05:27:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 01:32:15.670 05:27:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:15.670 05:27:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 01:32:15.670 05:27:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:15.670 05:27:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 01:32:15.670 05:27:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 01:32:15.670 05:27:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 01:32:15.670 05:27:07 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:15.670 05:27:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 01:32:15.670 [2024-12-09 05:27:07.147177] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 01:32:15.670 [2024-12-09 05:27:07.147299] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 01:32:15.670 [2024-12-09 05:27:07.217789] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 01:32:15.670 [2024-12-09 05:27:07.218098] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 01:32:15.670 [2024-12-09 05:27:07.218147] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 01:32:15.670 05:27:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:15.670 05:27:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i++ )) 01:32:15.670 05:27:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 01:32:15.670 05:27:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 01:32:15.670 05:27:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 01:32:15.670 05:27:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:15.670 05:27:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 01:32:15.670 05:27:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:15.670 05:27:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # raid_bdev= 01:32:15.670 05:27:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # '[' -n '' 
']' 01:32:15.670 05:27:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 01:32:15.670 05:27:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@326 -- # killprocess 86288 01:32:15.670 05:27:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 86288 ']' 01:32:15.670 05:27:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 86288 01:32:15.670 05:27:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # uname 01:32:15.670 05:27:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:32:15.670 05:27:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86288 01:32:15.928 killing process with pid 86288 01:32:15.928 05:27:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:32:15.928 05:27:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:32:15.928 05:27:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86288' 01:32:15.928 05:27:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@973 -- # kill 86288 01:32:15.928 [2024-12-09 05:27:07.310530] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 01:32:15.928 05:27:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@978 -- # wait 86288 01:32:15.928 [2024-12-09 05:27:07.322881] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 01:32:16.861 05:27:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@328 -- # return 0 01:32:16.861 01:32:16.861 real 0m5.549s 01:32:16.861 user 0m8.330s 01:32:16.861 sys 0m0.879s 01:32:16.861 05:27:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 
01:32:16.861 05:27:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 01:32:16.861 ************************************ 01:32:16.861 END TEST raid_state_function_test_sb_4k 01:32:16.861 ************************************ 01:32:16.861 05:27:08 bdev_raid -- bdev/bdev_raid.sh@998 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 01:32:16.861 05:27:08 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 01:32:16.861 05:27:08 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 01:32:16.861 05:27:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 01:32:16.861 ************************************ 01:32:16.861 START TEST raid_superblock_test_4k 01:32:16.861 ************************************ 01:32:16.861 05:27:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 01:32:16.861 05:27:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 01:32:16.861 05:27:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 01:32:16.861 05:27:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 01:32:16.861 05:27:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 01:32:16.861 05:27:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 01:32:16.861 05:27:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 01:32:16.861 05:27:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 01:32:16.861 05:27:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 01:32:16.861 05:27:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 01:32:16.861 05:27:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size 
01:32:16.861 05:27:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 01:32:16.861 05:27:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 01:32:16.861 05:27:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@402 -- # local raid_bdev 01:32:16.861 05:27:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 01:32:16.861 05:27:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@408 -- # strip_size=0 01:32:16.861 05:27:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # raid_pid=86542 01:32:16.861 05:27:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 01:32:16.861 05:27:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@413 -- # waitforlisten 86542 01:32:16.861 05:27:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@835 -- # '[' -z 86542 ']' 01:32:16.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:32:16.861 05:27:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:32:16.861 05:27:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@840 -- # local max_retries=100 01:32:16.861 05:27:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:32:16.861 05:27:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@844 -- # xtrace_disable 01:32:16.861 05:27:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 01:32:17.118 [2024-12-09 05:27:08.509966] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
01:32:17.118 [2024-12-09 05:27:08.510736] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86542 ] 01:32:17.118 [2024-12-09 05:27:08.678670] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:32:17.376 [2024-12-09 05:27:08.815434] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:32:17.633 [2024-12-09 05:27:09.005962] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 01:32:17.633 [2024-12-09 05:27:09.006010] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 01:32:18.199 05:27:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:32:18.199 05:27:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@868 -- # return 0 01:32:18.199 05:27:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 01:32:18.199 05:27:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 01:32:18.199 05:27:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 01:32:18.199 05:27:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 01:32:18.199 05:27:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 01:32:18.199 05:27:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 01:32:18.199 05:27:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 01:32:18.199 05:27:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 01:32:18.199 05:27:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 
4096 -b malloc1 01:32:18.199 05:27:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:18.199 05:27:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 01:32:18.199 malloc1 01:32:18.199 05:27:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:18.199 05:27:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 01:32:18.199 05:27:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:18.199 05:27:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 01:32:18.199 [2024-12-09 05:27:09.572585] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 01:32:18.199 [2024-12-09 05:27:09.573188] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:32:18.199 [2024-12-09 05:27:09.573324] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 01:32:18.199 [2024-12-09 05:27:09.573446] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:32:18.199 [2024-12-09 05:27:09.576466] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:32:18.199 [2024-12-09 05:27:09.576734] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 01:32:18.199 pt1 01:32:18.199 05:27:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:18.199 05:27:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 01:32:18.199 05:27:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 01:32:18.199 05:27:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 01:32:18.199 05:27:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt2 01:32:18.199 05:27:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 01:32:18.199 05:27:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 01:32:18.199 05:27:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 01:32:18.199 05:27:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 01:32:18.199 05:27:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc2 01:32:18.199 05:27:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:18.199 05:27:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 01:32:18.199 malloc2 01:32:18.199 05:27:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:18.199 05:27:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 01:32:18.199 05:27:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:18.199 05:27:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 01:32:18.199 [2024-12-09 05:27:09.620987] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 01:32:18.199 [2024-12-09 05:27:09.621293] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:32:18.199 [2024-12-09 05:27:09.621493] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 01:32:18.199 [2024-12-09 05:27:09.621608] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:32:18.199 [2024-12-09 05:27:09.624868] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:32:18.199 [2024-12-09 
05:27:09.624990] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 01:32:18.199 pt2 01:32:18.199 05:27:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:18.200 05:27:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 01:32:18.200 05:27:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 01:32:18.200 05:27:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 01:32:18.200 05:27:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:18.200 05:27:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 01:32:18.200 [2024-12-09 05:27:09.629302] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 01:32:18.200 [2024-12-09 05:27:09.632012] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 01:32:18.200 [2024-12-09 05:27:09.632289] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 01:32:18.200 [2024-12-09 05:27:09.632312] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 01:32:18.200 [2024-12-09 05:27:09.632719] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 01:32:18.200 [2024-12-09 05:27:09.633078] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 01:32:18.200 [2024-12-09 05:27:09.633110] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 01:32:18.200 [2024-12-09 05:27:09.633403] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:32:18.200 05:27:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:18.200 05:27:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 
-- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 01:32:18.200 05:27:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:32:18.200 05:27:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:32:18.200 05:27:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:32:18.200 05:27:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:32:18.200 05:27:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 01:32:18.200 05:27:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:32:18.200 05:27:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:32:18.200 05:27:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:32:18.200 05:27:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 01:32:18.200 05:27:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:32:18.200 05:27:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:32:18.200 05:27:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:18.200 05:27:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 01:32:18.200 05:27:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:18.200 05:27:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:32:18.200 "name": "raid_bdev1", 01:32:18.200 "uuid": "fb838a3d-098a-4a4e-ab3f-9450f9cff12e", 01:32:18.200 "strip_size_kb": 0, 01:32:18.200 "state": "online", 01:32:18.200 "raid_level": "raid1", 01:32:18.200 "superblock": true, 01:32:18.200 "num_base_bdevs": 2, 01:32:18.200 
"num_base_bdevs_discovered": 2, 01:32:18.200 "num_base_bdevs_operational": 2, 01:32:18.200 "base_bdevs_list": [ 01:32:18.200 { 01:32:18.200 "name": "pt1", 01:32:18.200 "uuid": "00000000-0000-0000-0000-000000000001", 01:32:18.200 "is_configured": true, 01:32:18.200 "data_offset": 256, 01:32:18.200 "data_size": 7936 01:32:18.200 }, 01:32:18.200 { 01:32:18.200 "name": "pt2", 01:32:18.200 "uuid": "00000000-0000-0000-0000-000000000002", 01:32:18.200 "is_configured": true, 01:32:18.200 "data_offset": 256, 01:32:18.200 "data_size": 7936 01:32:18.200 } 01:32:18.200 ] 01:32:18.200 }' 01:32:18.200 05:27:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:32:18.200 05:27:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 01:32:18.766 05:27:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 01:32:18.766 05:27:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 01:32:18.766 05:27:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 01:32:18.766 05:27:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 01:32:18.766 05:27:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 01:32:18.766 05:27:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 01:32:18.766 05:27:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 01:32:18.766 05:27:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 01:32:18.766 05:27:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:18.766 05:27:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 01:32:18.766 [2024-12-09 05:27:10.133871] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 01:32:18.766 05:27:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:18.766 05:27:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 01:32:18.766 "name": "raid_bdev1", 01:32:18.766 "aliases": [ 01:32:18.766 "fb838a3d-098a-4a4e-ab3f-9450f9cff12e" 01:32:18.767 ], 01:32:18.767 "product_name": "Raid Volume", 01:32:18.767 "block_size": 4096, 01:32:18.767 "num_blocks": 7936, 01:32:18.767 "uuid": "fb838a3d-098a-4a4e-ab3f-9450f9cff12e", 01:32:18.767 "assigned_rate_limits": { 01:32:18.767 "rw_ios_per_sec": 0, 01:32:18.767 "rw_mbytes_per_sec": 0, 01:32:18.767 "r_mbytes_per_sec": 0, 01:32:18.767 "w_mbytes_per_sec": 0 01:32:18.767 }, 01:32:18.767 "claimed": false, 01:32:18.767 "zoned": false, 01:32:18.767 "supported_io_types": { 01:32:18.767 "read": true, 01:32:18.767 "write": true, 01:32:18.767 "unmap": false, 01:32:18.767 "flush": false, 01:32:18.767 "reset": true, 01:32:18.767 "nvme_admin": false, 01:32:18.767 "nvme_io": false, 01:32:18.767 "nvme_io_md": false, 01:32:18.767 "write_zeroes": true, 01:32:18.767 "zcopy": false, 01:32:18.767 "get_zone_info": false, 01:32:18.767 "zone_management": false, 01:32:18.767 "zone_append": false, 01:32:18.767 "compare": false, 01:32:18.767 "compare_and_write": false, 01:32:18.767 "abort": false, 01:32:18.767 "seek_hole": false, 01:32:18.767 "seek_data": false, 01:32:18.767 "copy": false, 01:32:18.767 "nvme_iov_md": false 01:32:18.767 }, 01:32:18.767 "memory_domains": [ 01:32:18.767 { 01:32:18.767 "dma_device_id": "system", 01:32:18.767 "dma_device_type": 1 01:32:18.767 }, 01:32:18.767 { 01:32:18.767 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:32:18.767 "dma_device_type": 2 01:32:18.767 }, 01:32:18.767 { 01:32:18.767 "dma_device_id": "system", 01:32:18.767 "dma_device_type": 1 01:32:18.767 }, 01:32:18.767 { 01:32:18.767 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:32:18.767 "dma_device_type": 2 01:32:18.767 } 01:32:18.767 ], 
01:32:18.767 "driver_specific": { 01:32:18.767 "raid": { 01:32:18.767 "uuid": "fb838a3d-098a-4a4e-ab3f-9450f9cff12e", 01:32:18.767 "strip_size_kb": 0, 01:32:18.767 "state": "online", 01:32:18.767 "raid_level": "raid1", 01:32:18.767 "superblock": true, 01:32:18.767 "num_base_bdevs": 2, 01:32:18.767 "num_base_bdevs_discovered": 2, 01:32:18.767 "num_base_bdevs_operational": 2, 01:32:18.767 "base_bdevs_list": [ 01:32:18.767 { 01:32:18.767 "name": "pt1", 01:32:18.767 "uuid": "00000000-0000-0000-0000-000000000001", 01:32:18.767 "is_configured": true, 01:32:18.767 "data_offset": 256, 01:32:18.767 "data_size": 7936 01:32:18.767 }, 01:32:18.767 { 01:32:18.767 "name": "pt2", 01:32:18.767 "uuid": "00000000-0000-0000-0000-000000000002", 01:32:18.767 "is_configured": true, 01:32:18.767 "data_offset": 256, 01:32:18.767 "data_size": 7936 01:32:18.767 } 01:32:18.767 ] 01:32:18.767 } 01:32:18.767 } 01:32:18.767 }' 01:32:18.767 05:27:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 01:32:18.767 05:27:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 01:32:18.767 pt2' 01:32:18.767 05:27:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:32:18.767 05:27:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 01:32:18.767 05:27:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:32:18.767 05:27:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 01:32:18.767 05:27:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:32:18.767 05:27:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:18.767 05:27:10 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 01:32:18.767 05:27:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:18.767 05:27:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 01:32:18.767 05:27:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 01:32:18.767 05:27:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:32:18.767 05:27:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 01:32:18.767 05:27:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:18.767 05:27:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 01:32:18.767 05:27:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:32:18.767 05:27:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:19.024 05:27:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 01:32:19.024 05:27:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 01:32:19.024 05:27:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 01:32:19.024 05:27:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 01:32:19.024 05:27:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:19.024 05:27:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 01:32:19.024 [2024-12-09 05:27:10.397905] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 01:32:19.024 05:27:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 01:32:19.025 05:27:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=fb838a3d-098a-4a4e-ab3f-9450f9cff12e 01:32:19.025 05:27:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # '[' -z fb838a3d-098a-4a4e-ab3f-9450f9cff12e ']' 01:32:19.025 05:27:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 01:32:19.025 05:27:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:19.025 05:27:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 01:32:19.025 [2024-12-09 05:27:10.449521] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 01:32:19.025 [2024-12-09 05:27:10.449678] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 01:32:19.025 [2024-12-09 05:27:10.449823] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 01:32:19.025 [2024-12-09 05:27:10.449902] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 01:32:19.025 [2024-12-09 05:27:10.449921] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 01:32:19.025 05:27:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:19.025 05:27:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 01:32:19.025 05:27:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 01:32:19.025 05:27:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:19.025 05:27:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 01:32:19.025 05:27:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:19.025 05:27:10 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 01:32:19.025 05:27:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 01:32:19.025 05:27:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 01:32:19.025 05:27:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 01:32:19.025 05:27:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:19.025 05:27:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 01:32:19.025 05:27:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:19.025 05:27:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 01:32:19.025 05:27:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 01:32:19.025 05:27:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:19.025 05:27:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 01:32:19.025 05:27:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:19.025 05:27:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 01:32:19.025 05:27:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 01:32:19.025 05:27:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:19.025 05:27:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 01:32:19.025 05:27:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:19.025 05:27:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 01:32:19.025 05:27:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 
-- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 01:32:19.025 05:27:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@652 -- # local es=0 01:32:19.025 05:27:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 01:32:19.025 05:27:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 01:32:19.025 05:27:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:32:19.025 05:27:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 01:32:19.025 05:27:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:32:19.025 05:27:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 01:32:19.025 05:27:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:19.025 05:27:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 01:32:19.025 [2024-12-09 05:27:10.581601] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 01:32:19.025 [2024-12-09 05:27:10.584313] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 01:32:19.025 [2024-12-09 05:27:10.584429] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 01:32:19.025 [2024-12-09 05:27:10.584884] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 01:32:19.025 [2024-12-09 05:27:10.584938] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 01:32:19.025 [2024-12-09 05:27:10.584956] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 01:32:19.025 request: 01:32:19.025 { 01:32:19.025 "name": "raid_bdev1", 01:32:19.025 "raid_level": "raid1", 01:32:19.025 "base_bdevs": [ 01:32:19.025 "malloc1", 01:32:19.025 "malloc2" 01:32:19.025 ], 01:32:19.025 "superblock": false, 01:32:19.025 "method": "bdev_raid_create", 01:32:19.025 "req_id": 1 01:32:19.025 } 01:32:19.025 Got JSON-RPC error response 01:32:19.025 response: 01:32:19.025 { 01:32:19.025 "code": -17, 01:32:19.025 "message": "Failed to create RAID bdev raid_bdev1: File exists" 01:32:19.025 } 01:32:19.025 05:27:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 01:32:19.025 05:27:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@655 -- # es=1 01:32:19.025 05:27:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:32:19.025 05:27:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:32:19.025 05:27:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:32:19.025 05:27:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 01:32:19.025 05:27:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:19.025 05:27:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 01:32:19.025 05:27:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 01:32:19.025 05:27:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:19.282 05:27:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # raid_bdev= 01:32:19.282 05:27:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 01:32:19.282 05:27:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 01:32:19.282 05:27:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:19.282 05:27:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 01:32:19.282 [2024-12-09 05:27:10.649857] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 01:32:19.282 [2024-12-09 05:27:10.650132] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:32:19.282 [2024-12-09 05:27:10.650457] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 01:32:19.282 [2024-12-09 05:27:10.650712] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:32:19.282 [2024-12-09 05:27:10.654146] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:32:19.282 [2024-12-09 05:27:10.654460] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 01:32:19.282 [2024-12-09 05:27:10.654798] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 01:32:19.282 [2024-12-09 05:27:10.655024] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 01:32:19.282 pt1 01:32:19.282 05:27:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:19.282 05:27:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 01:32:19.282 05:27:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:32:19.282 05:27:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:32:19.282 05:27:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:32:19.282 05:27:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:32:19.282 05:27:10 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 01:32:19.282 05:27:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:32:19.282 05:27:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:32:19.282 05:27:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:32:19.282 05:27:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 01:32:19.282 05:27:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:32:19.282 05:27:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:32:19.282 05:27:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:19.282 05:27:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 01:32:19.282 05:27:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:19.282 05:27:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:32:19.282 "name": "raid_bdev1", 01:32:19.282 "uuid": "fb838a3d-098a-4a4e-ab3f-9450f9cff12e", 01:32:19.282 "strip_size_kb": 0, 01:32:19.282 "state": "configuring", 01:32:19.282 "raid_level": "raid1", 01:32:19.282 "superblock": true, 01:32:19.282 "num_base_bdevs": 2, 01:32:19.282 "num_base_bdevs_discovered": 1, 01:32:19.282 "num_base_bdevs_operational": 2, 01:32:19.282 "base_bdevs_list": [ 01:32:19.282 { 01:32:19.282 "name": "pt1", 01:32:19.282 "uuid": "00000000-0000-0000-0000-000000000001", 01:32:19.282 "is_configured": true, 01:32:19.282 "data_offset": 256, 01:32:19.282 "data_size": 7936 01:32:19.282 }, 01:32:19.282 { 01:32:19.282 "name": null, 01:32:19.282 "uuid": "00000000-0000-0000-0000-000000000002", 01:32:19.282 "is_configured": false, 01:32:19.282 "data_offset": 256, 01:32:19.282 "data_size": 7936 01:32:19.282 } 
01:32:19.282 ] 01:32:19.282 }' 01:32:19.282 05:27:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:32:19.282 05:27:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 01:32:19.848 05:27:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 01:32:19.848 05:27:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 01:32:19.848 05:27:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 01:32:19.848 05:27:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 01:32:19.848 05:27:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:19.848 05:27:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 01:32:19.848 [2024-12-09 05:27:11.163119] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 01:32:19.848 [2024-12-09 05:27:11.163488] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:32:19.848 [2024-12-09 05:27:11.163532] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 01:32:19.848 [2024-12-09 05:27:11.163552] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:32:19.848 [2024-12-09 05:27:11.164184] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:32:19.848 [2024-12-09 05:27:11.164212] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 01:32:19.848 [2024-12-09 05:27:11.164300] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 01:32:19.848 [2024-12-09 05:27:11.164335] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 01:32:19.848 [2024-12-09 05:27:11.164845] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000007e80 01:32:19.849 [2024-12-09 05:27:11.164876] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 01:32:19.849 [2024-12-09 05:27:11.165193] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 01:32:19.849 [2024-12-09 05:27:11.165428] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 01:32:19.849 [2024-12-09 05:27:11.165444] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 01:32:19.849 [2024-12-09 05:27:11.165646] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:32:19.849 pt2 01:32:19.849 05:27:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:19.849 05:27:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i++ )) 01:32:19.849 05:27:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 01:32:19.849 05:27:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 01:32:19.849 05:27:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:32:19.849 05:27:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:32:19.849 05:27:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:32:19.849 05:27:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:32:19.849 05:27:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 01:32:19.849 05:27:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:32:19.849 05:27:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:32:19.849 05:27:11 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:32:19.849 05:27:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 01:32:19.849 05:27:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:32:19.849 05:27:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:32:19.849 05:27:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:19.849 05:27:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 01:32:19.849 05:27:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:19.849 05:27:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:32:19.849 "name": "raid_bdev1", 01:32:19.849 "uuid": "fb838a3d-098a-4a4e-ab3f-9450f9cff12e", 01:32:19.849 "strip_size_kb": 0, 01:32:19.849 "state": "online", 01:32:19.849 "raid_level": "raid1", 01:32:19.849 "superblock": true, 01:32:19.849 "num_base_bdevs": 2, 01:32:19.849 "num_base_bdevs_discovered": 2, 01:32:19.849 "num_base_bdevs_operational": 2, 01:32:19.849 "base_bdevs_list": [ 01:32:19.849 { 01:32:19.849 "name": "pt1", 01:32:19.849 "uuid": "00000000-0000-0000-0000-000000000001", 01:32:19.849 "is_configured": true, 01:32:19.849 "data_offset": 256, 01:32:19.849 "data_size": 7936 01:32:19.849 }, 01:32:19.849 { 01:32:19.849 "name": "pt2", 01:32:19.849 "uuid": "00000000-0000-0000-0000-000000000002", 01:32:19.849 "is_configured": true, 01:32:19.849 "data_offset": 256, 01:32:19.849 "data_size": 7936 01:32:19.849 } 01:32:19.849 ] 01:32:19.849 }' 01:32:19.849 05:27:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:32:19.849 05:27:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 01:32:20.107 05:27:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties 
raid_bdev1 01:32:20.107 05:27:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 01:32:20.107 05:27:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 01:32:20.107 05:27:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 01:32:20.107 05:27:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 01:32:20.107 05:27:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 01:32:20.107 05:27:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 01:32:20.107 05:27:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 01:32:20.107 05:27:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:20.107 05:27:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 01:32:20.107 [2024-12-09 05:27:11.699510] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 01:32:20.107 05:27:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:20.365 05:27:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 01:32:20.365 "name": "raid_bdev1", 01:32:20.365 "aliases": [ 01:32:20.365 "fb838a3d-098a-4a4e-ab3f-9450f9cff12e" 01:32:20.365 ], 01:32:20.365 "product_name": "Raid Volume", 01:32:20.365 "block_size": 4096, 01:32:20.365 "num_blocks": 7936, 01:32:20.365 "uuid": "fb838a3d-098a-4a4e-ab3f-9450f9cff12e", 01:32:20.365 "assigned_rate_limits": { 01:32:20.365 "rw_ios_per_sec": 0, 01:32:20.365 "rw_mbytes_per_sec": 0, 01:32:20.365 "r_mbytes_per_sec": 0, 01:32:20.365 "w_mbytes_per_sec": 0 01:32:20.365 }, 01:32:20.365 "claimed": false, 01:32:20.365 "zoned": false, 01:32:20.365 "supported_io_types": { 01:32:20.365 "read": true, 01:32:20.365 "write": true, 01:32:20.365 "unmap": false, 
01:32:20.365 "flush": false, 01:32:20.365 "reset": true, 01:32:20.365 "nvme_admin": false, 01:32:20.365 "nvme_io": false, 01:32:20.365 "nvme_io_md": false, 01:32:20.365 "write_zeroes": true, 01:32:20.365 "zcopy": false, 01:32:20.365 "get_zone_info": false, 01:32:20.365 "zone_management": false, 01:32:20.365 "zone_append": false, 01:32:20.365 "compare": false, 01:32:20.365 "compare_and_write": false, 01:32:20.365 "abort": false, 01:32:20.365 "seek_hole": false, 01:32:20.365 "seek_data": false, 01:32:20.365 "copy": false, 01:32:20.365 "nvme_iov_md": false 01:32:20.365 }, 01:32:20.365 "memory_domains": [ 01:32:20.365 { 01:32:20.365 "dma_device_id": "system", 01:32:20.365 "dma_device_type": 1 01:32:20.365 }, 01:32:20.365 { 01:32:20.365 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:32:20.365 "dma_device_type": 2 01:32:20.365 }, 01:32:20.365 { 01:32:20.365 "dma_device_id": "system", 01:32:20.365 "dma_device_type": 1 01:32:20.365 }, 01:32:20.365 { 01:32:20.365 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:32:20.365 "dma_device_type": 2 01:32:20.365 } 01:32:20.365 ], 01:32:20.365 "driver_specific": { 01:32:20.365 "raid": { 01:32:20.365 "uuid": "fb838a3d-098a-4a4e-ab3f-9450f9cff12e", 01:32:20.365 "strip_size_kb": 0, 01:32:20.365 "state": "online", 01:32:20.365 "raid_level": "raid1", 01:32:20.365 "superblock": true, 01:32:20.365 "num_base_bdevs": 2, 01:32:20.365 "num_base_bdevs_discovered": 2, 01:32:20.365 "num_base_bdevs_operational": 2, 01:32:20.365 "base_bdevs_list": [ 01:32:20.365 { 01:32:20.365 "name": "pt1", 01:32:20.365 "uuid": "00000000-0000-0000-0000-000000000001", 01:32:20.365 "is_configured": true, 01:32:20.365 "data_offset": 256, 01:32:20.365 "data_size": 7936 01:32:20.365 }, 01:32:20.365 { 01:32:20.365 "name": "pt2", 01:32:20.365 "uuid": "00000000-0000-0000-0000-000000000002", 01:32:20.365 "is_configured": true, 01:32:20.365 "data_offset": 256, 01:32:20.365 "data_size": 7936 01:32:20.365 } 01:32:20.365 ] 01:32:20.365 } 01:32:20.365 } 01:32:20.365 }' 01:32:20.365 
05:27:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 01:32:20.365 05:27:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 01:32:20.365 pt2' 01:32:20.365 05:27:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:32:20.365 05:27:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 01:32:20.365 05:27:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:32:20.365 05:27:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 01:32:20.365 05:27:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:20.365 05:27:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 01:32:20.365 05:27:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:32:20.365 05:27:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:20.365 05:27:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 01:32:20.365 05:27:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 01:32:20.365 05:27:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:32:20.365 05:27:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 01:32:20.365 05:27:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:32:20.365 05:27:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:20.365 
05:27:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 01:32:20.365 05:27:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:20.365 05:27:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 01:32:20.365 05:27:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 01:32:20.365 05:27:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 01:32:20.365 05:27:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 01:32:20.365 05:27:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:20.365 05:27:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 01:32:20.365 [2024-12-09 05:27:11.967553] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 01:32:20.623 05:27:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:20.623 05:27:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # '[' fb838a3d-098a-4a4e-ab3f-9450f9cff12e '!=' fb838a3d-098a-4a4e-ab3f-9450f9cff12e ']' 01:32:20.623 05:27:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 01:32:20.623 05:27:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 01:32:20.623 05:27:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@199 -- # return 0 01:32:20.623 05:27:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 01:32:20.623 05:27:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:20.623 05:27:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 01:32:20.623 [2024-12-09 05:27:12.015333] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 01:32:20.623 
05:27:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:20.623 05:27:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 01:32:20.623 05:27:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:32:20.623 05:27:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:32:20.623 05:27:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:32:20.623 05:27:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:32:20.623 05:27:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 01:32:20.623 05:27:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:32:20.623 05:27:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:32:20.623 05:27:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:32:20.623 05:27:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 01:32:20.623 05:27:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:32:20.624 05:27:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:32:20.624 05:27:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:20.624 05:27:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 01:32:20.624 05:27:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:20.624 05:27:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:32:20.624 "name": "raid_bdev1", 01:32:20.624 "uuid": "fb838a3d-098a-4a4e-ab3f-9450f9cff12e", 
01:32:20.624 "strip_size_kb": 0, 01:32:20.624 "state": "online", 01:32:20.624 "raid_level": "raid1", 01:32:20.624 "superblock": true, 01:32:20.624 "num_base_bdevs": 2, 01:32:20.624 "num_base_bdevs_discovered": 1, 01:32:20.624 "num_base_bdevs_operational": 1, 01:32:20.624 "base_bdevs_list": [ 01:32:20.624 { 01:32:20.624 "name": null, 01:32:20.624 "uuid": "00000000-0000-0000-0000-000000000000", 01:32:20.624 "is_configured": false, 01:32:20.624 "data_offset": 0, 01:32:20.624 "data_size": 7936 01:32:20.624 }, 01:32:20.624 { 01:32:20.624 "name": "pt2", 01:32:20.624 "uuid": "00000000-0000-0000-0000-000000000002", 01:32:20.624 "is_configured": true, 01:32:20.624 "data_offset": 256, 01:32:20.624 "data_size": 7936 01:32:20.624 } 01:32:20.624 ] 01:32:20.624 }' 01:32:20.624 05:27:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:32:20.624 05:27:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 01:32:21.190 05:27:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 01:32:21.191 05:27:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:21.191 05:27:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 01:32:21.191 [2024-12-09 05:27:12.539517] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 01:32:21.191 [2024-12-09 05:27:12.539549] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 01:32:21.191 [2024-12-09 05:27:12.539645] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 01:32:21.191 [2024-12-09 05:27:12.539708] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 01:32:21.191 [2024-12-09 05:27:12.539733] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 01:32:21.191 05:27:12 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:21.191 05:27:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 01:32:21.191 05:27:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 01:32:21.191 05:27:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:21.191 05:27:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 01:32:21.191 05:27:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:21.191 05:27:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # raid_bdev= 01:32:21.191 05:27:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 01:32:21.191 05:27:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 01:32:21.191 05:27:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 01:32:21.191 05:27:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 01:32:21.191 05:27:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:21.191 05:27:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 01:32:21.191 05:27:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:21.191 05:27:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i++ )) 01:32:21.191 05:27:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 01:32:21.191 05:27:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 01:32:21.191 05:27:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 01:32:21.191 05:27:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # i=1 01:32:21.191 05:27:12 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 01:32:21.191 05:27:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:21.191 05:27:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 01:32:21.191 [2024-12-09 05:27:12.611479] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 01:32:21.191 [2024-12-09 05:27:12.612073] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:32:21.191 [2024-12-09 05:27:12.612220] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 01:32:21.191 [2024-12-09 05:27:12.612315] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:32:21.191 [2024-12-09 05:27:12.615200] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:32:21.191 [2024-12-09 05:27:12.615503] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 01:32:21.191 [2024-12-09 05:27:12.615724] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 01:32:21.191 [2024-12-09 05:27:12.615807] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 01:32:21.191 [2024-12-09 05:27:12.615996] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 01:32:21.191 [2024-12-09 05:27:12.616019] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 01:32:21.191 pt2 01:32:21.191 [2024-12-09 05:27:12.616319] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 01:32:21.191 05:27:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:21.191 [2024-12-09 05:27:12.616609] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 01:32:21.191 [2024-12-09 05:27:12.616631] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 01:32:21.191 05:27:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 01:32:21.191 [2024-12-09 05:27:12.616859] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:32:21.191 05:27:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:32:21.191 05:27:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:32:21.191 05:27:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:32:21.191 05:27:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:32:21.191 05:27:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 01:32:21.191 05:27:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:32:21.191 05:27:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:32:21.191 05:27:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:32:21.191 05:27:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 01:32:21.191 05:27:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:32:21.191 05:27:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:32:21.191 05:27:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:21.191 05:27:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 01:32:21.191 05:27:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:21.191 05:27:12 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:32:21.191 "name": "raid_bdev1", 01:32:21.191 "uuid": "fb838a3d-098a-4a4e-ab3f-9450f9cff12e", 01:32:21.191 "strip_size_kb": 0, 01:32:21.191 "state": "online", 01:32:21.191 "raid_level": "raid1", 01:32:21.191 "superblock": true, 01:32:21.191 "num_base_bdevs": 2, 01:32:21.191 "num_base_bdevs_discovered": 1, 01:32:21.191 "num_base_bdevs_operational": 1, 01:32:21.191 "base_bdevs_list": [ 01:32:21.191 { 01:32:21.191 "name": null, 01:32:21.191 "uuid": "00000000-0000-0000-0000-000000000000", 01:32:21.191 "is_configured": false, 01:32:21.191 "data_offset": 256, 01:32:21.191 "data_size": 7936 01:32:21.191 }, 01:32:21.191 { 01:32:21.191 "name": "pt2", 01:32:21.191 "uuid": "00000000-0000-0000-0000-000000000002", 01:32:21.191 "is_configured": true, 01:32:21.191 "data_offset": 256, 01:32:21.191 "data_size": 7936 01:32:21.191 } 01:32:21.191 ] 01:32:21.191 }' 01:32:21.191 05:27:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:32:21.191 05:27:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 01:32:21.759 05:27:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 01:32:21.759 05:27:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:21.759 05:27:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 01:32:21.759 [2024-12-09 05:27:13.148015] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 01:32:21.759 [2024-12-09 05:27:13.148051] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 01:32:21.759 [2024-12-09 05:27:13.148144] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 01:32:21.759 [2024-12-09 05:27:13.148212] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 01:32:21.759 [2024-12-09 05:27:13.148228] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 01:32:21.759 05:27:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:21.759 05:27:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 01:32:21.759 05:27:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:21.759 05:27:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 01:32:21.759 05:27:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 01:32:21.759 05:27:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:21.759 05:27:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # raid_bdev= 01:32:21.759 05:27:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 01:32:21.759 05:27:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 01:32:21.759 05:27:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 01:32:21.759 05:27:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:21.759 05:27:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 01:32:21.759 [2024-12-09 05:27:13.216064] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 01:32:21.759 [2024-12-09 05:27:13.216161] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:32:21.759 [2024-12-09 05:27:13.216191] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 01:32:21.759 [2024-12-09 05:27:13.216219] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:32:21.759 [2024-12-09 05:27:13.219141] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:32:21.759 [2024-12-09 05:27:13.219345] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 01:32:21.759 [2024-12-09 05:27:13.219532] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 01:32:21.759 [2024-12-09 05:27:13.219594] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 01:32:21.759 [2024-12-09 05:27:13.219839] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 01:32:21.759 [2024-12-09 05:27:13.219859] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 01:32:21.759 [2024-12-09 05:27:13.219882] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 01:32:21.759 [2024-12-09 05:27:13.219953] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 01:32:21.759 [2024-12-09 05:27:13.220080] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 01:32:21.759 [2024-12-09 05:27:13.220097] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 01:32:21.759 [2024-12-09 05:27:13.220438] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 01:32:21.759 [2024-12-09 05:27:13.220657] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 01:32:21.759 [2024-12-09 05:27:13.220679] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 01:32:21.759 [2024-12-09 05:27:13.220911] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:32:21.759 pt1 01:32:21.759 05:27:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:21.759 05:27:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 
01:32:21.759 05:27:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 01:32:21.759 05:27:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:32:21.759 05:27:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:32:21.759 05:27:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:32:21.759 05:27:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:32:21.759 05:27:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 01:32:21.759 05:27:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:32:21.759 05:27:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:32:21.759 05:27:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:32:21.759 05:27:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 01:32:21.759 05:27:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:32:21.759 05:27:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:21.759 05:27:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:32:21.759 05:27:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 01:32:21.759 05:27:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:21.759 05:27:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:32:21.759 "name": "raid_bdev1", 01:32:21.759 "uuid": "fb838a3d-098a-4a4e-ab3f-9450f9cff12e", 01:32:21.759 "strip_size_kb": 0, 01:32:21.759 "state": "online", 01:32:21.759 "raid_level": "raid1", 
01:32:21.759 "superblock": true, 01:32:21.759 "num_base_bdevs": 2, 01:32:21.759 "num_base_bdevs_discovered": 1, 01:32:21.759 "num_base_bdevs_operational": 1, 01:32:21.759 "base_bdevs_list": [ 01:32:21.759 { 01:32:21.759 "name": null, 01:32:21.759 "uuid": "00000000-0000-0000-0000-000000000000", 01:32:21.759 "is_configured": false, 01:32:21.759 "data_offset": 256, 01:32:21.759 "data_size": 7936 01:32:21.759 }, 01:32:21.759 { 01:32:21.759 "name": "pt2", 01:32:21.759 "uuid": "00000000-0000-0000-0000-000000000002", 01:32:21.759 "is_configured": true, 01:32:21.759 "data_offset": 256, 01:32:21.759 "data_size": 7936 01:32:21.759 } 01:32:21.759 ] 01:32:21.759 }' 01:32:21.759 05:27:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:32:21.759 05:27:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 01:32:22.327 05:27:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 01:32:22.327 05:27:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 01:32:22.327 05:27:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:22.327 05:27:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 01:32:22.327 05:27:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:22.327 05:27:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 01:32:22.327 05:27:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 01:32:22.327 05:27:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:22.327 05:27:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 01:32:22.327 05:27:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 01:32:22.327 
[2024-12-09 05:27:13.792655] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 01:32:22.327 05:27:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:22.327 05:27:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # '[' fb838a3d-098a-4a4e-ab3f-9450f9cff12e '!=' fb838a3d-098a-4a4e-ab3f-9450f9cff12e ']' 01:32:22.327 05:27:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@563 -- # killprocess 86542 01:32:22.327 05:27:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # '[' -z 86542 ']' 01:32:22.327 05:27:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@958 -- # kill -0 86542 01:32:22.327 05:27:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # uname 01:32:22.327 05:27:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:32:22.327 05:27:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86542 01:32:22.327 killing process with pid 86542 01:32:22.327 05:27:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:32:22.327 05:27:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:32:22.327 05:27:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86542' 01:32:22.327 05:27:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@973 -- # kill 86542 01:32:22.327 [2024-12-09 05:27:13.872384] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 01:32:22.327 [2024-12-09 05:27:13.872580] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 01:32:22.327 05:27:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@978 -- # wait 86542 01:32:22.327 [2024-12-09 05:27:13.872771] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: 
raid bdev base bdevs is 0, going to free all in destruct 01:32:22.327 [2024-12-09 05:27:13.872940] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 01:32:22.587 [2024-12-09 05:27:14.025184] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 01:32:23.578 ************************************ 01:32:23.578 END TEST raid_superblock_test_4k 01:32:23.578 ************************************ 01:32:23.578 05:27:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@565 -- # return 0 01:32:23.578 01:32:23.578 real 0m6.644s 01:32:23.578 user 0m10.540s 01:32:23.578 sys 0m0.983s 01:32:23.578 05:27:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 01:32:23.578 05:27:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 01:32:23.578 05:27:15 bdev_raid -- bdev/bdev_raid.sh@999 -- # '[' true = true ']' 01:32:23.578 05:27:15 bdev_raid -- bdev/bdev_raid.sh@1000 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 01:32:23.578 05:27:15 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 01:32:23.578 05:27:15 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 01:32:23.578 05:27:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 01:32:23.578 ************************************ 01:32:23.578 START TEST raid_rebuild_test_sb_4k 01:32:23.578 ************************************ 01:32:23.578 05:27:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 01:32:23.578 05:27:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 01:32:23.578 05:27:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 01:32:23.578 05:27:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local superblock=true 01:32:23.578 05:27:15 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@572 -- # local background_io=false 01:32:23.578 05:27:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local verify=true 01:32:23.578 05:27:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 01:32:23.578 05:27:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 01:32:23.578 05:27:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 01:32:23.578 05:27:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 01:32:23.578 05:27:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 01:32:23.578 05:27:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 01:32:23.578 05:27:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 01:32:23.578 05:27:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 01:32:23.578 05:27:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 01:32:23.578 05:27:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local base_bdevs 01:32:23.578 05:27:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 01:32:23.578 05:27:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local strip_size 01:32:23.578 05:27:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local create_arg 01:32:23.578 05:27:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 01:32:23.578 05:27:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@579 -- # local data_offset 01:32:23.578 05:27:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 01:32:23.578 05:27:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # strip_size=0 01:32:23.578 05:27:15 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 01:32:23.578 05:27:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 01:32:23.578 05:27:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # raid_pid=86871 01:32:23.578 05:27:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@598 -- # waitforlisten 86871 01:32:23.578 05:27:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 86871 ']' 01:32:23.578 05:27:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:32:23.578 05:27:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@840 -- # local max_retries=100 01:32:23.578 05:27:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 01:32:23.578 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:32:23.578 05:27:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:32:23.578 05:27:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 01:32:23.578 05:27:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 01:32:23.836 I/O size of 3145728 is greater than zero copy threshold (65536). 01:32:23.836 Zero copy mechanism will not be used. 01:32:23.836 [2024-12-09 05:27:15.240854] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
01:32:23.836 [2024-12-09 05:27:15.241036] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86871 ] 01:32:23.836 [2024-12-09 05:27:15.421697] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:32:24.094 [2024-12-09 05:27:15.541522] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:32:24.352 [2024-12-09 05:27:15.732554] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 01:32:24.352 [2024-12-09 05:27:15.732635] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 01:32:24.611 05:27:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:32:24.611 05:27:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 01:32:24.611 05:27:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 01:32:24.611 05:27:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1_malloc 01:32:24.611 05:27:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:24.611 05:27:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 01:32:24.611 BaseBdev1_malloc 01:32:24.611 05:27:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:24.611 05:27:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 01:32:24.611 05:27:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:24.611 05:27:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 01:32:24.611 [2024-12-09 05:27:16.197552] vbdev_passthru.c: 
608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 01:32:24.611 [2024-12-09 05:27:16.197644] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:32:24.611 [2024-12-09 05:27:16.197676] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 01:32:24.611 [2024-12-09 05:27:16.197694] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:32:24.611 [2024-12-09 05:27:16.200318] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:32:24.611 [2024-12-09 05:27:16.200412] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 01:32:24.611 BaseBdev1 01:32:24.611 05:27:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:24.611 05:27:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 01:32:24.611 05:27:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2_malloc 01:32:24.611 05:27:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:24.611 05:27:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 01:32:24.870 BaseBdev2_malloc 01:32:24.870 05:27:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:24.870 05:27:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 01:32:24.870 05:27:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:24.870 05:27:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 01:32:24.870 [2024-12-09 05:27:16.250226] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 01:32:24.870 [2024-12-09 05:27:16.250328] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base 
bdev opened 01:32:24.870 [2024-12-09 05:27:16.250364] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 01:32:24.870 [2024-12-09 05:27:16.250427] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:32:24.870 [2024-12-09 05:27:16.253129] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:32:24.870 [2024-12-09 05:27:16.253192] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 01:32:24.870 BaseBdev2 01:32:24.870 05:27:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:24.870 05:27:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -b spare_malloc 01:32:24.870 05:27:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:24.870 05:27:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 01:32:24.870 spare_malloc 01:32:24.870 05:27:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:24.870 05:27:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 01:32:24.870 05:27:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:24.870 05:27:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 01:32:24.870 spare_delay 01:32:24.870 05:27:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:24.870 05:27:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 01:32:24.870 05:27:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:24.870 05:27:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 01:32:24.870 
[2024-12-09 05:27:16.318942] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 01:32:24.870 [2024-12-09 05:27:16.319024] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:32:24.870 [2024-12-09 05:27:16.319057] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 01:32:24.870 [2024-12-09 05:27:16.319075] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:32:24.870 [2024-12-09 05:27:16.322034] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:32:24.870 [2024-12-09 05:27:16.322088] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 01:32:24.870 spare 01:32:24.870 05:27:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:24.870 05:27:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 01:32:24.870 05:27:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:24.870 05:27:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 01:32:24.870 [2024-12-09 05:27:16.331027] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 01:32:24.870 [2024-12-09 05:27:16.333347] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 01:32:24.870 [2024-12-09 05:27:16.333775] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 01:32:24.870 [2024-12-09 05:27:16.333971] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 01:32:24.870 [2024-12-09 05:27:16.334325] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 01:32:24.870 [2024-12-09 05:27:16.334743] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 01:32:24.870 [2024-12-09 
05:27:16.334880] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 01:32:24.870 [2024-12-09 05:27:16.335124] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:32:24.870 05:27:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:24.870 05:27:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 01:32:24.870 05:27:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:32:24.870 05:27:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:32:24.870 05:27:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:32:24.870 05:27:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:32:24.870 05:27:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 01:32:24.870 05:27:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:32:24.870 05:27:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:32:24.870 05:27:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:32:24.870 05:27:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 01:32:24.870 05:27:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:32:24.870 05:27:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:32:24.870 05:27:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:24.870 05:27:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 01:32:24.870 05:27:16 bdev_raid.raid_rebuild_test_sb_4k 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:24.870 05:27:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:32:24.870 "name": "raid_bdev1", 01:32:24.870 "uuid": "f4476f99-2a97-4c99-ae1a-2e5c526ceebd", 01:32:24.870 "strip_size_kb": 0, 01:32:24.870 "state": "online", 01:32:24.870 "raid_level": "raid1", 01:32:24.870 "superblock": true, 01:32:24.870 "num_base_bdevs": 2, 01:32:24.870 "num_base_bdevs_discovered": 2, 01:32:24.870 "num_base_bdevs_operational": 2, 01:32:24.870 "base_bdevs_list": [ 01:32:24.870 { 01:32:24.870 "name": "BaseBdev1", 01:32:24.870 "uuid": "f2d329e1-98ec-5825-a2fd-407a6ad0fed3", 01:32:24.870 "is_configured": true, 01:32:24.870 "data_offset": 256, 01:32:24.870 "data_size": 7936 01:32:24.870 }, 01:32:24.870 { 01:32:24.870 "name": "BaseBdev2", 01:32:24.870 "uuid": "e743b2d5-6de3-599a-a2d0-8a1da86441df", 01:32:24.870 "is_configured": true, 01:32:24.870 "data_offset": 256, 01:32:24.870 "data_size": 7936 01:32:24.870 } 01:32:24.870 ] 01:32:24.870 }' 01:32:24.870 05:27:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:32:24.870 05:27:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 01:32:25.437 05:27:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 01:32:25.437 05:27:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 01:32:25.437 05:27:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:25.437 05:27:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 01:32:25.437 [2024-12-09 05:27:16.879625] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 01:32:25.437 05:27:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:25.437 05:27:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # 
raid_bdev_size=7936 01:32:25.437 05:27:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 01:32:25.437 05:27:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 01:32:25.437 05:27:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:25.437 05:27:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 01:32:25.437 05:27:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:25.437 05:27:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # data_offset=256 01:32:25.437 05:27:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 01:32:25.437 05:27:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 01:32:25.437 05:27:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@625 -- # local write_unit_size 01:32:25.437 05:27:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 01:32:25.437 05:27:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 01:32:25.437 05:27:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 01:32:25.437 05:27:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 01:32:25.437 05:27:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 01:32:25.437 05:27:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 01:32:25.437 05:27:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 01:32:25.437 05:27:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 01:32:25.437 05:27:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 01:32:25.437 
05:27:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 01:32:25.695 [2024-12-09 05:27:17.271512] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 01:32:25.695 /dev/nbd0 01:32:25.953 05:27:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 01:32:25.953 05:27:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 01:32:25.953 05:27:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 01:32:25.953 05:27:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 01:32:25.953 05:27:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 01:32:25.953 05:27:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 01:32:25.953 05:27:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 01:32:25.953 05:27:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 01:32:25.953 05:27:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 01:32:25.953 05:27:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 01:32:25.953 05:27:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 01:32:25.953 1+0 records in 01:32:25.953 1+0 records out 01:32:25.953 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000334361 s, 12.3 MB/s 01:32:25.953 05:27:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:32:25.953 05:27:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 01:32:25.953 05:27:17 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:32:25.953 05:27:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 01:32:25.953 05:27:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 01:32:25.953 05:27:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 01:32:25.953 05:27:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 01:32:25.953 05:27:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 01:32:25.953 05:27:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 01:32:25.953 05:27:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 01:32:26.888 7936+0 records in 01:32:26.888 7936+0 records out 01:32:26.888 32505856 bytes (33 MB, 31 MiB) copied, 0.950783 s, 34.2 MB/s 01:32:26.888 05:27:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 01:32:26.888 05:27:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 01:32:26.888 05:27:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 01:32:26.888 05:27:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 01:32:26.888 05:27:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 01:32:26.888 05:27:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 01:32:26.888 05:27:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 01:32:27.146 05:27:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 01:32:27.146 
[2024-12-09 05:27:18.583938] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:32:27.146 05:27:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 01:32:27.146 05:27:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 01:32:27.146 05:27:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:32:27.146 05:27:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:32:27.146 05:27:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 01:32:27.146 05:27:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 01:32:27.146 05:27:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 01:32:27.146 05:27:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 01:32:27.147 05:27:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:27.147 05:27:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 01:32:27.147 [2024-12-09 05:27:18.596024] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 01:32:27.147 05:27:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:27.147 05:27:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 01:32:27.147 05:27:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:32:27.147 05:27:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:32:27.147 05:27:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:32:27.147 05:27:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:32:27.147 05:27:18 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 01:32:27.147 05:27:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:32:27.147 05:27:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:32:27.147 05:27:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:32:27.147 05:27:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 01:32:27.147 05:27:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:32:27.147 05:27:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:32:27.147 05:27:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:27.147 05:27:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 01:32:27.147 05:27:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:27.147 05:27:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:32:27.147 "name": "raid_bdev1", 01:32:27.147 "uuid": "f4476f99-2a97-4c99-ae1a-2e5c526ceebd", 01:32:27.147 "strip_size_kb": 0, 01:32:27.147 "state": "online", 01:32:27.147 "raid_level": "raid1", 01:32:27.147 "superblock": true, 01:32:27.147 "num_base_bdevs": 2, 01:32:27.147 "num_base_bdevs_discovered": 1, 01:32:27.147 "num_base_bdevs_operational": 1, 01:32:27.147 "base_bdevs_list": [ 01:32:27.147 { 01:32:27.147 "name": null, 01:32:27.147 "uuid": "00000000-0000-0000-0000-000000000000", 01:32:27.147 "is_configured": false, 01:32:27.147 "data_offset": 0, 01:32:27.147 "data_size": 7936 01:32:27.147 }, 01:32:27.147 { 01:32:27.147 "name": "BaseBdev2", 01:32:27.147 "uuid": "e743b2d5-6de3-599a-a2d0-8a1da86441df", 01:32:27.147 "is_configured": true, 01:32:27.147 "data_offset": 256, 01:32:27.147 
"data_size": 7936 01:32:27.147 } 01:32:27.147 ] 01:32:27.147 }' 01:32:27.147 05:27:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:32:27.147 05:27:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 01:32:27.712 05:27:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 01:32:27.712 05:27:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:27.712 05:27:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 01:32:27.712 [2024-12-09 05:27:19.132235] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 01:32:27.712 [2024-12-09 05:27:19.148526] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 01:32:27.712 05:27:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:27.712 05:27:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@647 -- # sleep 1 01:32:27.712 [2024-12-09 05:27:19.151157] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 01:32:28.646 05:27:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 01:32:28.646 05:27:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:32:28.646 05:27:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 01:32:28.646 05:27:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 01:32:28.646 05:27:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:32:28.646 05:27:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:32:28.646 05:27:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # 
xtrace_disable 01:32:28.646 05:27:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 01:32:28.646 05:27:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:32:28.646 05:27:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:28.646 05:27:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:32:28.646 "name": "raid_bdev1", 01:32:28.646 "uuid": "f4476f99-2a97-4c99-ae1a-2e5c526ceebd", 01:32:28.646 "strip_size_kb": 0, 01:32:28.646 "state": "online", 01:32:28.646 "raid_level": "raid1", 01:32:28.646 "superblock": true, 01:32:28.646 "num_base_bdevs": 2, 01:32:28.646 "num_base_bdevs_discovered": 2, 01:32:28.646 "num_base_bdevs_operational": 2, 01:32:28.646 "process": { 01:32:28.646 "type": "rebuild", 01:32:28.646 "target": "spare", 01:32:28.646 "progress": { 01:32:28.646 "blocks": 2560, 01:32:28.646 "percent": 32 01:32:28.646 } 01:32:28.646 }, 01:32:28.646 "base_bdevs_list": [ 01:32:28.646 { 01:32:28.646 "name": "spare", 01:32:28.646 "uuid": "795f6c20-bd59-599e-bb74-1cba90fb4c16", 01:32:28.646 "is_configured": true, 01:32:28.646 "data_offset": 256, 01:32:28.646 "data_size": 7936 01:32:28.646 }, 01:32:28.646 { 01:32:28.646 "name": "BaseBdev2", 01:32:28.646 "uuid": "e743b2d5-6de3-599a-a2d0-8a1da86441df", 01:32:28.646 "is_configured": true, 01:32:28.646 "data_offset": 256, 01:32:28.646 "data_size": 7936 01:32:28.646 } 01:32:28.646 ] 01:32:28.646 }' 01:32:28.646 05:27:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:32:28.646 05:27:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 01:32:28.905 05:27:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:32:28.905 05:27:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 
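The progress block above reports 2560 of 7936 data blocks rebuilt at 32 percent, and the earlier `dd` run onto `/dev/nbd0` copied 7936 records of 4096 bytes (32505856 bytes). Both figures can be re-derived from values already present in this trace (a standalone sketch; every constant below is copied from the log):

```python
# Re-derive two figures reported in the trace from their inputs.
data_size_blocks = 7936   # "data_size"/num_blocks in the RPC dumps
blocklen = 4096           # "blockcnt 7936, blocklen 4096" at raid creation
rebuilt_blocks = 2560     # "blocks": 2560 in the rebuild progress dump

total_bytes = data_size_blocks * blocklen
percent = rebuilt_blocks * 100 // data_size_blocks

print(total_bytes)  # 32505856, the byte count dd reported
print(percent)      # 32, the "percent" field in the progress dump
```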
01:32:28.905 05:27:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 01:32:28.905 05:27:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:28.905 05:27:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 01:32:28.905 [2024-12-09 05:27:20.316783] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 01:32:28.905 [2024-12-09 05:27:20.359956] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 01:32:28.905 [2024-12-09 05:27:20.360052] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:32:28.905 [2024-12-09 05:27:20.360075] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 01:32:28.905 [2024-12-09 05:27:20.360089] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 01:32:28.905 05:27:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:28.906 05:27:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 01:32:28.906 05:27:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:32:28.906 05:27:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:32:28.906 05:27:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:32:28.906 05:27:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:32:28.906 05:27:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 01:32:28.906 05:27:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:32:28.906 05:27:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
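After the failed `bdev_raid_remove_base_bdev spare` mid-rebuild, the test re-runs `verify_raid_bdev_state raid_bdev1 online raid1 0 1`: it selects the bdev from `bdev_raid_get_bdevs all` with jq and compares fields against the expected values. A minimal Python sketch of that comparison, using field names and values from the info dump in this trace (the JSON literal is abbreviated to the fields the check touches; this is an illustration, not the bdev_raid.sh code):

```python
import json

# Shape of the verify_raid_bdev_state check, applied to the info that
# bdev_raid_get_bdevs returns after a base bdev has been removed
# (values taken from the trace; base_bdevs_list omitted for brevity).
info = json.loads("""{
  "name": "raid_bdev1",
  "state": "online",
  "raid_level": "raid1",
  "strip_size_kb": 0,
  "num_base_bdevs": 2,
  "num_base_bdevs_discovered": 1,
  "num_base_bdevs_operational": 1
}""")

# verify_raid_bdev_state raid_bdev1 online raid1 0 1
assert info["state"] == "online"
assert info["raid_level"] == "raid1"
assert info["strip_size_kb"] == 0
assert info["num_base_bdevs_operational"] == 1
print("state ok")
```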
01:32:28.906 05:27:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:32:28.906 05:27:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 01:32:28.906 05:27:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:32:28.906 05:27:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:28.906 05:27:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:32:28.906 05:27:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 01:32:28.906 05:27:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:28.906 05:27:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:32:28.906 "name": "raid_bdev1", 01:32:28.906 "uuid": "f4476f99-2a97-4c99-ae1a-2e5c526ceebd", 01:32:28.906 "strip_size_kb": 0, 01:32:28.906 "state": "online", 01:32:28.906 "raid_level": "raid1", 01:32:28.906 "superblock": true, 01:32:28.906 "num_base_bdevs": 2, 01:32:28.906 "num_base_bdevs_discovered": 1, 01:32:28.906 "num_base_bdevs_operational": 1, 01:32:28.906 "base_bdevs_list": [ 01:32:28.906 { 01:32:28.906 "name": null, 01:32:28.906 "uuid": "00000000-0000-0000-0000-000000000000", 01:32:28.906 "is_configured": false, 01:32:28.906 "data_offset": 0, 01:32:28.906 "data_size": 7936 01:32:28.906 }, 01:32:28.906 { 01:32:28.906 "name": "BaseBdev2", 01:32:28.906 "uuid": "e743b2d5-6de3-599a-a2d0-8a1da86441df", 01:32:28.906 "is_configured": true, 01:32:28.906 "data_offset": 256, 01:32:28.906 "data_size": 7936 01:32:28.906 } 01:32:28.906 ] 01:32:28.906 }' 01:32:28.906 05:27:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:32:28.906 05:27:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 01:32:29.473 05:27:20 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 01:32:29.473 05:27:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:32:29.473 05:27:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 01:32:29.473 05:27:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 01:32:29.473 05:27:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:32:29.473 05:27:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:32:29.473 05:27:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:32:29.473 05:27:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:29.473 05:27:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 01:32:29.473 05:27:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:29.473 05:27:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:32:29.473 "name": "raid_bdev1", 01:32:29.473 "uuid": "f4476f99-2a97-4c99-ae1a-2e5c526ceebd", 01:32:29.473 "strip_size_kb": 0, 01:32:29.473 "state": "online", 01:32:29.473 "raid_level": "raid1", 01:32:29.473 "superblock": true, 01:32:29.473 "num_base_bdevs": 2, 01:32:29.473 "num_base_bdevs_discovered": 1, 01:32:29.473 "num_base_bdevs_operational": 1, 01:32:29.473 "base_bdevs_list": [ 01:32:29.473 { 01:32:29.473 "name": null, 01:32:29.473 "uuid": "00000000-0000-0000-0000-000000000000", 01:32:29.473 "is_configured": false, 01:32:29.473 "data_offset": 0, 01:32:29.473 "data_size": 7936 01:32:29.473 }, 01:32:29.473 { 01:32:29.473 "name": "BaseBdev2", 01:32:29.473 "uuid": "e743b2d5-6de3-599a-a2d0-8a1da86441df", 01:32:29.473 "is_configured": true, 01:32:29.473 "data_offset": 
256, 01:32:29.473 "data_size": 7936 01:32:29.473 } 01:32:29.473 ] 01:32:29.473 }' 01:32:29.473 05:27:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:32:29.473 05:27:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 01:32:29.473 05:27:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:32:29.473 05:27:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 01:32:29.473 05:27:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 01:32:29.473 05:27:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:29.473 05:27:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 01:32:29.473 [2024-12-09 05:27:21.060830] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 01:32:29.473 [2024-12-09 05:27:21.076994] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 01:32:29.473 05:27:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:29.473 05:27:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # sleep 1 01:32:29.473 [2024-12-09 05:27:21.079712] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 01:32:30.912 05:27:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 01:32:30.912 05:27:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:32:30.912 05:27:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 01:32:30.912 05:27:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 01:32:30.912 05:27:22 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:32:30.912 05:27:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:32:30.912 05:27:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:30.912 05:27:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:32:30.912 05:27:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 01:32:30.912 05:27:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:30.912 05:27:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:32:30.912 "name": "raid_bdev1", 01:32:30.912 "uuid": "f4476f99-2a97-4c99-ae1a-2e5c526ceebd", 01:32:30.912 "strip_size_kb": 0, 01:32:30.912 "state": "online", 01:32:30.912 "raid_level": "raid1", 01:32:30.912 "superblock": true, 01:32:30.912 "num_base_bdevs": 2, 01:32:30.912 "num_base_bdevs_discovered": 2, 01:32:30.912 "num_base_bdevs_operational": 2, 01:32:30.912 "process": { 01:32:30.912 "type": "rebuild", 01:32:30.912 "target": "spare", 01:32:30.912 "progress": { 01:32:30.912 "blocks": 2560, 01:32:30.912 "percent": 32 01:32:30.912 } 01:32:30.912 }, 01:32:30.912 "base_bdevs_list": [ 01:32:30.912 { 01:32:30.912 "name": "spare", 01:32:30.912 "uuid": "795f6c20-bd59-599e-bb74-1cba90fb4c16", 01:32:30.912 "is_configured": true, 01:32:30.912 "data_offset": 256, 01:32:30.912 "data_size": 7936 01:32:30.912 }, 01:32:30.912 { 01:32:30.912 "name": "BaseBdev2", 01:32:30.912 "uuid": "e743b2d5-6de3-599a-a2d0-8a1da86441df", 01:32:30.912 "is_configured": true, 01:32:30.912 "data_offset": 256, 01:32:30.912 "data_size": 7936 01:32:30.912 } 01:32:30.912 ] 01:32:30.912 }' 01:32:30.912 05:27:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:32:30.912 05:27:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 01:32:30.912 05:27:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:32:30.912 05:27:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 01:32:30.912 05:27:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 01:32:30.912 05:27:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 01:32:30.912 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 01:32:30.912 05:27:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 01:32:30.912 05:27:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 01:32:30.912 05:27:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 01:32:30.912 05:27:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # local timeout=744 01:32:30.912 05:27:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 01:32:30.912 05:27:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 01:32:30.912 05:27:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:32:30.912 05:27:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 01:32:30.912 05:27:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 01:32:30.912 05:27:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:32:30.912 05:27:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:32:30.912 05:27:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:32:30.912 05:27:22 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:30.912 05:27:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 01:32:30.912 05:27:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:30.912 05:27:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:32:30.912 "name": "raid_bdev1", 01:32:30.912 "uuid": "f4476f99-2a97-4c99-ae1a-2e5c526ceebd", 01:32:30.912 "strip_size_kb": 0, 01:32:30.912 "state": "online", 01:32:30.912 "raid_level": "raid1", 01:32:30.912 "superblock": true, 01:32:30.912 "num_base_bdevs": 2, 01:32:30.912 "num_base_bdevs_discovered": 2, 01:32:30.912 "num_base_bdevs_operational": 2, 01:32:30.912 "process": { 01:32:30.912 "type": "rebuild", 01:32:30.912 "target": "spare", 01:32:30.912 "progress": { 01:32:30.912 "blocks": 2816, 01:32:30.912 "percent": 35 01:32:30.912 } 01:32:30.912 }, 01:32:30.912 "base_bdevs_list": [ 01:32:30.912 { 01:32:30.912 "name": "spare", 01:32:30.912 "uuid": "795f6c20-bd59-599e-bb74-1cba90fb4c16", 01:32:30.912 "is_configured": true, 01:32:30.912 "data_offset": 256, 01:32:30.912 "data_size": 7936 01:32:30.912 }, 01:32:30.912 { 01:32:30.912 "name": "BaseBdev2", 01:32:30.912 "uuid": "e743b2d5-6de3-599a-a2d0-8a1da86441df", 01:32:30.912 "is_configured": true, 01:32:30.912 "data_offset": 256, 01:32:30.912 "data_size": 7936 01:32:30.912 } 01:32:30.912 ] 01:32:30.912 }' 01:32:30.912 05:27:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:32:30.912 05:27:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 01:32:30.912 05:27:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:32:30.912 05:27:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 01:32:30.912 05:27:22 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@711 -- # sleep 1 01:32:31.861 05:27:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 01:32:31.861 05:27:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 01:32:31.861 05:27:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:32:31.861 05:27:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 01:32:31.861 05:27:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 01:32:31.861 05:27:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:32:31.861 05:27:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:32:31.861 05:27:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:32:31.861 05:27:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:31.861 05:27:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 01:32:31.861 05:27:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:32.120 05:27:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:32:32.120 "name": "raid_bdev1", 01:32:32.120 "uuid": "f4476f99-2a97-4c99-ae1a-2e5c526ceebd", 01:32:32.120 "strip_size_kb": 0, 01:32:32.120 "state": "online", 01:32:32.120 "raid_level": "raid1", 01:32:32.120 "superblock": true, 01:32:32.120 "num_base_bdevs": 2, 01:32:32.120 "num_base_bdevs_discovered": 2, 01:32:32.120 "num_base_bdevs_operational": 2, 01:32:32.120 "process": { 01:32:32.120 "type": "rebuild", 01:32:32.120 "target": "spare", 01:32:32.120 "progress": { 01:32:32.120 "blocks": 5888, 01:32:32.120 "percent": 74 01:32:32.120 } 01:32:32.120 }, 01:32:32.120 "base_bdevs_list": [ 01:32:32.120 { 
01:32:32.120 "name": "spare", 01:32:32.120 "uuid": "795f6c20-bd59-599e-bb74-1cba90fb4c16", 01:32:32.120 "is_configured": true, 01:32:32.120 "data_offset": 256, 01:32:32.120 "data_size": 7936 01:32:32.120 }, 01:32:32.120 { 01:32:32.120 "name": "BaseBdev2", 01:32:32.120 "uuid": "e743b2d5-6de3-599a-a2d0-8a1da86441df", 01:32:32.120 "is_configured": true, 01:32:32.120 "data_offset": 256, 01:32:32.120 "data_size": 7936 01:32:32.120 } 01:32:32.120 ] 01:32:32.120 }' 01:32:32.120 05:27:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:32:32.120 05:27:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 01:32:32.120 05:27:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:32:32.120 05:27:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 01:32:32.120 05:27:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 01:32:32.688 [2024-12-09 05:27:24.202226] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 01:32:32.688 [2024-12-09 05:27:24.202564] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 01:32:32.688 [2024-12-09 05:27:24.202732] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:32:33.257 05:27:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 01:32:33.257 05:27:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 01:32:33.257 05:27:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:32:33.257 05:27:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 01:32:33.257 05:27:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 
01:32:33.257 05:27:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:32:33.257 05:27:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:32:33.257 05:27:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:32:33.257 05:27:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:33.257 05:27:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 01:32:33.257 05:27:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:33.257 05:27:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:32:33.257 "name": "raid_bdev1", 01:32:33.257 "uuid": "f4476f99-2a97-4c99-ae1a-2e5c526ceebd", 01:32:33.257 "strip_size_kb": 0, 01:32:33.257 "state": "online", 01:32:33.257 "raid_level": "raid1", 01:32:33.257 "superblock": true, 01:32:33.257 "num_base_bdevs": 2, 01:32:33.257 "num_base_bdevs_discovered": 2, 01:32:33.257 "num_base_bdevs_operational": 2, 01:32:33.257 "base_bdevs_list": [ 01:32:33.257 { 01:32:33.257 "name": "spare", 01:32:33.257 "uuid": "795f6c20-bd59-599e-bb74-1cba90fb4c16", 01:32:33.257 "is_configured": true, 01:32:33.257 "data_offset": 256, 01:32:33.257 "data_size": 7936 01:32:33.257 }, 01:32:33.257 { 01:32:33.257 "name": "BaseBdev2", 01:32:33.257 "uuid": "e743b2d5-6de3-599a-a2d0-8a1da86441df", 01:32:33.257 "is_configured": true, 01:32:33.257 "data_offset": 256, 01:32:33.257 "data_size": 7936 01:32:33.257 } 01:32:33.257 ] 01:32:33.257 }' 01:32:33.257 05:27:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:32:33.257 05:27:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 01:32:33.257 05:27:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 
01:32:33.257 05:27:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 01:32:33.257 05:27:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@709 -- # break 01:32:33.257 05:27:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 01:32:33.257 05:27:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:32:33.257 05:27:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 01:32:33.257 05:27:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 01:32:33.257 05:27:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:32:33.257 05:27:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:32:33.257 05:27:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:33.257 05:27:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:32:33.258 05:27:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 01:32:33.258 05:27:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:33.258 05:27:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:32:33.258 "name": "raid_bdev1", 01:32:33.258 "uuid": "f4476f99-2a97-4c99-ae1a-2e5c526ceebd", 01:32:33.258 "strip_size_kb": 0, 01:32:33.258 "state": "online", 01:32:33.258 "raid_level": "raid1", 01:32:33.258 "superblock": true, 01:32:33.258 "num_base_bdevs": 2, 01:32:33.258 "num_base_bdevs_discovered": 2, 01:32:33.258 "num_base_bdevs_operational": 2, 01:32:33.258 "base_bdevs_list": [ 01:32:33.258 { 01:32:33.258 "name": "spare", 01:32:33.258 "uuid": "795f6c20-bd59-599e-bb74-1cba90fb4c16", 01:32:33.258 "is_configured": true, 01:32:33.258 
"data_offset": 256, 01:32:33.258 "data_size": 7936 01:32:33.258 }, 01:32:33.258 { 01:32:33.258 "name": "BaseBdev2", 01:32:33.258 "uuid": "e743b2d5-6de3-599a-a2d0-8a1da86441df", 01:32:33.258 "is_configured": true, 01:32:33.258 "data_offset": 256, 01:32:33.258 "data_size": 7936 01:32:33.258 } 01:32:33.258 ] 01:32:33.258 }' 01:32:33.258 05:27:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:32:33.517 05:27:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 01:32:33.517 05:27:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:32:33.517 05:27:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 01:32:33.517 05:27:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 01:32:33.517 05:27:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:32:33.517 05:27:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:32:33.517 05:27:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:32:33.517 05:27:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:32:33.517 05:27:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 01:32:33.517 05:27:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:32:33.517 05:27:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:32:33.517 05:27:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:32:33.517 05:27:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 01:32:33.517 05:27:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq 
-r '.[] | select(.name == "raid_bdev1")' 01:32:33.517 05:27:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:32:33.517 05:27:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:33.517 05:27:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 01:32:33.517 05:27:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:33.517 05:27:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:32:33.517 "name": "raid_bdev1", 01:32:33.517 "uuid": "f4476f99-2a97-4c99-ae1a-2e5c526ceebd", 01:32:33.517 "strip_size_kb": 0, 01:32:33.517 "state": "online", 01:32:33.517 "raid_level": "raid1", 01:32:33.517 "superblock": true, 01:32:33.517 "num_base_bdevs": 2, 01:32:33.517 "num_base_bdevs_discovered": 2, 01:32:33.517 "num_base_bdevs_operational": 2, 01:32:33.517 "base_bdevs_list": [ 01:32:33.517 { 01:32:33.517 "name": "spare", 01:32:33.517 "uuid": "795f6c20-bd59-599e-bb74-1cba90fb4c16", 01:32:33.517 "is_configured": true, 01:32:33.517 "data_offset": 256, 01:32:33.517 "data_size": 7936 01:32:33.517 }, 01:32:33.517 { 01:32:33.517 "name": "BaseBdev2", 01:32:33.517 "uuid": "e743b2d5-6de3-599a-a2d0-8a1da86441df", 01:32:33.517 "is_configured": true, 01:32:33.517 "data_offset": 256, 01:32:33.517 "data_size": 7936 01:32:33.517 } 01:32:33.517 ] 01:32:33.517 }' 01:32:33.517 05:27:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:32:33.517 05:27:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 01:32:34.083 05:27:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 01:32:34.083 05:27:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:34.083 05:27:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 01:32:34.083 
[2024-12-09 05:27:25.465178] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 01:32:34.083 [2024-12-09 05:27:25.465214] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 01:32:34.083 [2024-12-09 05:27:25.465328] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 01:32:34.083 [2024-12-09 05:27:25.465482] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 01:32:34.083 [2024-12-09 05:27:25.465504] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 01:32:34.083 05:27:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:34.083 05:27:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # jq length 01:32:34.083 05:27:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 01:32:34.083 05:27:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:34.083 05:27:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 01:32:34.083 05:27:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:34.083 05:27:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 01:32:34.083 05:27:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 01:32:34.083 05:27:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 01:32:34.083 05:27:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 01:32:34.083 05:27:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 01:32:34.083 05:27:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # 
bdev_list=('BaseBdev1' 'spare') 01:32:34.083 05:27:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 01:32:34.083 05:27:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 01:32:34.083 05:27:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 01:32:34.083 05:27:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 01:32:34.084 05:27:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 01:32:34.084 05:27:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 01:32:34.084 05:27:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 01:32:34.349 /dev/nbd0 01:32:34.349 05:27:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 01:32:34.349 05:27:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 01:32:34.349 05:27:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 01:32:34.349 05:27:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 01:32:34.349 05:27:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 01:32:34.349 05:27:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 01:32:34.349 05:27:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 01:32:34.349 05:27:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 01:32:34.349 05:27:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 01:32:34.349 05:27:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 01:32:34.349 05:27:25 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 01:32:34.349 1+0 records in 01:32:34.349 1+0 records out 01:32:34.349 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000453217 s, 9.0 MB/s 01:32:34.349 05:27:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:32:34.349 05:27:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 01:32:34.349 05:27:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:32:34.349 05:27:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 01:32:34.349 05:27:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 01:32:34.349 05:27:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 01:32:34.349 05:27:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 01:32:34.350 05:27:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 01:32:34.609 /dev/nbd1 01:32:34.609 05:27:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 01:32:34.609 05:27:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 01:32:34.609 05:27:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 01:32:34.609 05:27:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 01:32:34.609 05:27:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 01:32:34.609 05:27:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 01:32:34.609 05:27:26 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 01:32:34.609 05:27:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 01:32:34.609 05:27:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 01:32:34.609 05:27:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 01:32:34.610 05:27:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 01:32:34.610 1+0 records in 01:32:34.610 1+0 records out 01:32:34.610 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00035664 s, 11.5 MB/s 01:32:34.610 05:27:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:32:34.868 05:27:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 01:32:34.868 05:27:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:32:34.868 05:27:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 01:32:34.868 05:27:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 01:32:34.868 05:27:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 01:32:34.868 05:27:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 01:32:34.868 05:27:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 01:32:34.868 05:27:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 01:32:34.868 05:27:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 01:32:34.868 05:27:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 01:32:34.868 05:27:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 01:32:34.868 05:27:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 01:32:34.868 05:27:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 01:32:34.868 05:27:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 01:32:35.125 05:27:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 01:32:35.125 05:27:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 01:32:35.125 05:27:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 01:32:35.125 05:27:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:32:35.125 05:27:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:32:35.125 05:27:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 01:32:35.125 05:27:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 01:32:35.126 05:27:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 01:32:35.126 05:27:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 01:32:35.126 05:27:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 01:32:35.383 05:27:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 01:32:35.383 05:27:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 01:32:35.383 05:27:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 01:32:35.383 05:27:26 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:32:35.383 05:27:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:32:35.383 05:27:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 01:32:35.383 05:27:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 01:32:35.383 05:27:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 01:32:35.383 05:27:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 01:32:35.383 05:27:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 01:32:35.383 05:27:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:35.383 05:27:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 01:32:35.383 05:27:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:35.383 05:27:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 01:32:35.383 05:27:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:35.383 05:27:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 01:32:35.383 [2024-12-09 05:27:26.983647] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 01:32:35.383 [2024-12-09 05:27:26.983726] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:32:35.383 [2024-12-09 05:27:26.983777] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 01:32:35.384 [2024-12-09 05:27:26.983793] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:32:35.384 [2024-12-09 05:27:26.987186] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:32:35.384 
[2024-12-09 05:27:26.987230] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 01:32:35.384 [2024-12-09 05:27:26.987354] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 01:32:35.384 [2024-12-09 05:27:26.987454] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 01:32:35.384 [2024-12-09 05:27:26.987656] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 01:32:35.384 spare 01:32:35.384 05:27:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:35.384 05:27:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 01:32:35.384 05:27:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:35.384 05:27:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 01:32:35.641 [2024-12-09 05:27:27.087864] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 01:32:35.641 [2024-12-09 05:27:27.087898] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 01:32:35.641 [2024-12-09 05:27:27.088253] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 01:32:35.641 [2024-12-09 05:27:27.088685] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 01:32:35.641 [2024-12-09 05:27:27.088741] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 01:32:35.641 [2024-12-09 05:27:27.089085] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:32:35.641 05:27:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:35.641 05:27:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 01:32:35.641 05:27:27 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:32:35.641 05:27:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:32:35.641 05:27:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:32:35.641 05:27:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:32:35.641 05:27:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 01:32:35.641 05:27:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:32:35.641 05:27:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:32:35.641 05:27:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:32:35.641 05:27:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 01:32:35.641 05:27:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:32:35.641 05:27:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:32:35.641 05:27:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:35.641 05:27:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 01:32:35.641 05:27:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:35.641 05:27:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:32:35.641 "name": "raid_bdev1", 01:32:35.641 "uuid": "f4476f99-2a97-4c99-ae1a-2e5c526ceebd", 01:32:35.641 "strip_size_kb": 0, 01:32:35.641 "state": "online", 01:32:35.641 "raid_level": "raid1", 01:32:35.641 "superblock": true, 01:32:35.641 "num_base_bdevs": 2, 01:32:35.641 "num_base_bdevs_discovered": 2, 01:32:35.641 "num_base_bdevs_operational": 2, 
01:32:35.641 "base_bdevs_list": [ 01:32:35.641 { 01:32:35.641 "name": "spare", 01:32:35.641 "uuid": "795f6c20-bd59-599e-bb74-1cba90fb4c16", 01:32:35.641 "is_configured": true, 01:32:35.641 "data_offset": 256, 01:32:35.641 "data_size": 7936 01:32:35.641 }, 01:32:35.641 { 01:32:35.641 "name": "BaseBdev2", 01:32:35.641 "uuid": "e743b2d5-6de3-599a-a2d0-8a1da86441df", 01:32:35.641 "is_configured": true, 01:32:35.641 "data_offset": 256, 01:32:35.641 "data_size": 7936 01:32:35.641 } 01:32:35.641 ] 01:32:35.641 }' 01:32:35.641 05:27:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:32:35.641 05:27:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 01:32:36.207 05:27:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 01:32:36.207 05:27:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:32:36.207 05:27:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 01:32:36.207 05:27:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 01:32:36.207 05:27:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:32:36.207 05:27:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:32:36.207 05:27:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:36.207 05:27:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 01:32:36.207 05:27:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:32:36.207 05:27:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:36.207 05:27:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:32:36.207 "name": "raid_bdev1", 01:32:36.207 
"uuid": "f4476f99-2a97-4c99-ae1a-2e5c526ceebd", 01:32:36.207 "strip_size_kb": 0, 01:32:36.207 "state": "online", 01:32:36.207 "raid_level": "raid1", 01:32:36.207 "superblock": true, 01:32:36.207 "num_base_bdevs": 2, 01:32:36.207 "num_base_bdevs_discovered": 2, 01:32:36.207 "num_base_bdevs_operational": 2, 01:32:36.207 "base_bdevs_list": [ 01:32:36.207 { 01:32:36.207 "name": "spare", 01:32:36.207 "uuid": "795f6c20-bd59-599e-bb74-1cba90fb4c16", 01:32:36.207 "is_configured": true, 01:32:36.207 "data_offset": 256, 01:32:36.207 "data_size": 7936 01:32:36.207 }, 01:32:36.207 { 01:32:36.207 "name": "BaseBdev2", 01:32:36.207 "uuid": "e743b2d5-6de3-599a-a2d0-8a1da86441df", 01:32:36.207 "is_configured": true, 01:32:36.207 "data_offset": 256, 01:32:36.207 "data_size": 7936 01:32:36.207 } 01:32:36.207 ] 01:32:36.207 }' 01:32:36.207 05:27:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:32:36.207 05:27:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 01:32:36.207 05:27:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:32:36.207 05:27:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 01:32:36.207 05:27:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 01:32:36.207 05:27:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 01:32:36.207 05:27:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:36.207 05:27:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 01:32:36.207 05:27:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:36.464 05:27:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 01:32:36.464 05:27:27 bdev_raid.raid_rebuild_test_sb_4k 
-- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 01:32:36.464 05:27:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:36.464 05:27:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 01:32:36.464 [2024-12-09 05:27:27.860069] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 01:32:36.464 05:27:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:36.464 05:27:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 01:32:36.464 05:27:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:32:36.464 05:27:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:32:36.464 05:27:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:32:36.464 05:27:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:32:36.464 05:27:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 01:32:36.464 05:27:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:32:36.464 05:27:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:32:36.464 05:27:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:32:36.464 05:27:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 01:32:36.464 05:27:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:32:36.464 05:27:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:32:36.464 05:27:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:36.464 
05:27:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 01:32:36.464 05:27:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:36.464 05:27:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:32:36.464 "name": "raid_bdev1", 01:32:36.464 "uuid": "f4476f99-2a97-4c99-ae1a-2e5c526ceebd", 01:32:36.464 "strip_size_kb": 0, 01:32:36.464 "state": "online", 01:32:36.464 "raid_level": "raid1", 01:32:36.464 "superblock": true, 01:32:36.464 "num_base_bdevs": 2, 01:32:36.464 "num_base_bdevs_discovered": 1, 01:32:36.464 "num_base_bdevs_operational": 1, 01:32:36.464 "base_bdevs_list": [ 01:32:36.464 { 01:32:36.464 "name": null, 01:32:36.464 "uuid": "00000000-0000-0000-0000-000000000000", 01:32:36.464 "is_configured": false, 01:32:36.464 "data_offset": 0, 01:32:36.464 "data_size": 7936 01:32:36.464 }, 01:32:36.464 { 01:32:36.464 "name": "BaseBdev2", 01:32:36.464 "uuid": "e743b2d5-6de3-599a-a2d0-8a1da86441df", 01:32:36.464 "is_configured": true, 01:32:36.464 "data_offset": 256, 01:32:36.464 "data_size": 7936 01:32:36.464 } 01:32:36.464 ] 01:32:36.464 }' 01:32:36.464 05:27:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:32:36.464 05:27:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 01:32:37.028 05:27:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 01:32:37.028 05:27:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:37.028 05:27:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 01:32:37.028 [2024-12-09 05:27:28.420333] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 01:32:37.028 [2024-12-09 05:27:28.420672] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev 
raid_bdev1 (5) 01:32:37.028 [2024-12-09 05:27:28.420699] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 01:32:37.028 [2024-12-09 05:27:28.420761] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 01:32:37.028 [2024-12-09 05:27:28.436724] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 01:32:37.028 05:27:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:37.028 05:27:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@757 -- # sleep 1 01:32:37.028 [2024-12-09 05:27:28.439463] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 01:32:37.961 05:27:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 01:32:37.961 05:27:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:32:37.961 05:27:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 01:32:37.961 05:27:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 01:32:37.961 05:27:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:32:37.961 05:27:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:32:37.961 05:27:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:32:37.961 05:27:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:37.961 05:27:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 01:32:37.961 05:27:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:37.961 05:27:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:32:37.961 
"name": "raid_bdev1", 01:32:37.961 "uuid": "f4476f99-2a97-4c99-ae1a-2e5c526ceebd", 01:32:37.961 "strip_size_kb": 0, 01:32:37.961 "state": "online", 01:32:37.961 "raid_level": "raid1", 01:32:37.961 "superblock": true, 01:32:37.961 "num_base_bdevs": 2, 01:32:37.961 "num_base_bdevs_discovered": 2, 01:32:37.961 "num_base_bdevs_operational": 2, 01:32:37.961 "process": { 01:32:37.961 "type": "rebuild", 01:32:37.961 "target": "spare", 01:32:37.961 "progress": { 01:32:37.961 "blocks": 2560, 01:32:37.961 "percent": 32 01:32:37.961 } 01:32:37.961 }, 01:32:37.961 "base_bdevs_list": [ 01:32:37.961 { 01:32:37.961 "name": "spare", 01:32:37.961 "uuid": "795f6c20-bd59-599e-bb74-1cba90fb4c16", 01:32:37.961 "is_configured": true, 01:32:37.961 "data_offset": 256, 01:32:37.961 "data_size": 7936 01:32:37.961 }, 01:32:37.961 { 01:32:37.961 "name": "BaseBdev2", 01:32:37.961 "uuid": "e743b2d5-6de3-599a-a2d0-8a1da86441df", 01:32:37.961 "is_configured": true, 01:32:37.961 "data_offset": 256, 01:32:37.961 "data_size": 7936 01:32:37.961 } 01:32:37.961 ] 01:32:37.961 }' 01:32:37.961 05:27:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:32:37.961 05:27:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 01:32:37.961 05:27:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:32:38.220 05:27:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 01:32:38.220 05:27:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 01:32:38.220 05:27:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:38.220 05:27:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 01:32:38.220 [2024-12-09 05:27:29.609440] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 01:32:38.220 [2024-12-09 
05:27:29.647923] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 01:32:38.220 [2024-12-09 05:27:29.648047] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:32:38.220 [2024-12-09 05:27:29.648070] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 01:32:38.220 [2024-12-09 05:27:29.648084] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 01:32:38.220 05:27:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:38.220 05:27:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 01:32:38.220 05:27:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:32:38.220 05:27:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:32:38.220 05:27:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:32:38.220 05:27:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:32:38.220 05:27:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 01:32:38.220 05:27:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:32:38.220 05:27:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:32:38.220 05:27:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:32:38.220 05:27:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 01:32:38.220 05:27:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:32:38.220 05:27:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
01:32:38.220 05:27:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:38.220 05:27:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 01:32:38.220 05:27:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:38.220 05:27:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:32:38.220 "name": "raid_bdev1", 01:32:38.220 "uuid": "f4476f99-2a97-4c99-ae1a-2e5c526ceebd", 01:32:38.220 "strip_size_kb": 0, 01:32:38.220 "state": "online", 01:32:38.220 "raid_level": "raid1", 01:32:38.220 "superblock": true, 01:32:38.220 "num_base_bdevs": 2, 01:32:38.220 "num_base_bdevs_discovered": 1, 01:32:38.220 "num_base_bdevs_operational": 1, 01:32:38.220 "base_bdevs_list": [ 01:32:38.220 { 01:32:38.220 "name": null, 01:32:38.220 "uuid": "00000000-0000-0000-0000-000000000000", 01:32:38.220 "is_configured": false, 01:32:38.220 "data_offset": 0, 01:32:38.220 "data_size": 7936 01:32:38.220 }, 01:32:38.220 { 01:32:38.220 "name": "BaseBdev2", 01:32:38.220 "uuid": "e743b2d5-6de3-599a-a2d0-8a1da86441df", 01:32:38.220 "is_configured": true, 01:32:38.220 "data_offset": 256, 01:32:38.220 "data_size": 7936 01:32:38.220 } 01:32:38.220 ] 01:32:38.220 }' 01:32:38.220 05:27:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:32:38.220 05:27:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 01:32:38.801 05:27:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 01:32:38.801 05:27:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:38.801 05:27:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 01:32:38.801 [2024-12-09 05:27:30.226194] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 01:32:38.801 [2024-12-09 05:27:30.226273] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:32:38.801 [2024-12-09 05:27:30.226305] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 01:32:38.801 [2024-12-09 05:27:30.226323] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:32:38.801 [2024-12-09 05:27:30.227018] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:32:38.801 [2024-12-09 05:27:30.227056] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 01:32:38.801 [2024-12-09 05:27:30.227171] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 01:32:38.801 [2024-12-09 05:27:30.227210] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 01:32:38.801 [2024-12-09 05:27:30.227226] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
01:32:38.801 [2024-12-09 05:27:30.227259] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 01:32:38.801 [2024-12-09 05:27:30.243527] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 01:32:38.801 spare 01:32:38.801 05:27:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:38.801 05:27:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@764 -- # sleep 1 01:32:38.801 [2024-12-09 05:27:30.246193] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 01:32:39.763 05:27:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 01:32:39.763 05:27:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:32:39.763 05:27:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 01:32:39.763 05:27:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 01:32:39.763 05:27:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:32:39.763 05:27:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:32:39.763 05:27:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:32:39.763 05:27:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:39.763 05:27:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 01:32:39.763 05:27:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:39.763 05:27:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:32:39.763 "name": "raid_bdev1", 01:32:39.763 "uuid": "f4476f99-2a97-4c99-ae1a-2e5c526ceebd", 01:32:39.763 "strip_size_kb": 0, 01:32:39.763 
"state": "online", 01:32:39.763 "raid_level": "raid1", 01:32:39.763 "superblock": true, 01:32:39.763 "num_base_bdevs": 2, 01:32:39.763 "num_base_bdevs_discovered": 2, 01:32:39.763 "num_base_bdevs_operational": 2, 01:32:39.763 "process": { 01:32:39.763 "type": "rebuild", 01:32:39.763 "target": "spare", 01:32:39.763 "progress": { 01:32:39.763 "blocks": 2560, 01:32:39.763 "percent": 32 01:32:39.763 } 01:32:39.763 }, 01:32:39.763 "base_bdevs_list": [ 01:32:39.763 { 01:32:39.763 "name": "spare", 01:32:39.763 "uuid": "795f6c20-bd59-599e-bb74-1cba90fb4c16", 01:32:39.763 "is_configured": true, 01:32:39.763 "data_offset": 256, 01:32:39.763 "data_size": 7936 01:32:39.763 }, 01:32:39.763 { 01:32:39.763 "name": "BaseBdev2", 01:32:39.763 "uuid": "e743b2d5-6de3-599a-a2d0-8a1da86441df", 01:32:39.763 "is_configured": true, 01:32:39.763 "data_offset": 256, 01:32:39.763 "data_size": 7936 01:32:39.763 } 01:32:39.763 ] 01:32:39.763 }' 01:32:39.764 05:27:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:32:39.764 05:27:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 01:32:39.764 05:27:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:32:40.023 05:27:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 01:32:40.023 05:27:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 01:32:40.023 05:27:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:40.023 05:27:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 01:32:40.023 [2024-12-09 05:27:31.411788] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 01:32:40.023 [2024-12-09 05:27:31.455635] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 
01:32:40.023 [2024-12-09 05:27:31.455730] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:32:40.023 [2024-12-09 05:27:31.455759] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 01:32:40.023 [2024-12-09 05:27:31.455787] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 01:32:40.023 05:27:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:40.023 05:27:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 01:32:40.023 05:27:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:32:40.023 05:27:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:32:40.023 05:27:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:32:40.023 05:27:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:32:40.023 05:27:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 01:32:40.023 05:27:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:32:40.023 05:27:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:32:40.023 05:27:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:32:40.023 05:27:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 01:32:40.023 05:27:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:32:40.023 05:27:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:40.023 05:27:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:32:40.023 05:27:31 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 01:32:40.023 05:27:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:40.023 05:27:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:32:40.023 "name": "raid_bdev1", 01:32:40.023 "uuid": "f4476f99-2a97-4c99-ae1a-2e5c526ceebd", 01:32:40.023 "strip_size_kb": 0, 01:32:40.023 "state": "online", 01:32:40.023 "raid_level": "raid1", 01:32:40.023 "superblock": true, 01:32:40.023 "num_base_bdevs": 2, 01:32:40.023 "num_base_bdevs_discovered": 1, 01:32:40.023 "num_base_bdevs_operational": 1, 01:32:40.023 "base_bdevs_list": [ 01:32:40.023 { 01:32:40.023 "name": null, 01:32:40.023 "uuid": "00000000-0000-0000-0000-000000000000", 01:32:40.023 "is_configured": false, 01:32:40.023 "data_offset": 0, 01:32:40.023 "data_size": 7936 01:32:40.023 }, 01:32:40.023 { 01:32:40.023 "name": "BaseBdev2", 01:32:40.023 "uuid": "e743b2d5-6de3-599a-a2d0-8a1da86441df", 01:32:40.023 "is_configured": true, 01:32:40.023 "data_offset": 256, 01:32:40.023 "data_size": 7936 01:32:40.023 } 01:32:40.023 ] 01:32:40.023 }' 01:32:40.023 05:27:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:32:40.023 05:27:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 01:32:40.591 05:27:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 01:32:40.591 05:27:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:32:40.591 05:27:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 01:32:40.591 05:27:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 01:32:40.591 05:27:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:32:40.591 05:27:32 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:32:40.591 05:27:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:40.591 05:27:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 01:32:40.591 05:27:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:32:40.591 05:27:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:40.591 05:27:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:32:40.591 "name": "raid_bdev1", 01:32:40.591 "uuid": "f4476f99-2a97-4c99-ae1a-2e5c526ceebd", 01:32:40.591 "strip_size_kb": 0, 01:32:40.591 "state": "online", 01:32:40.591 "raid_level": "raid1", 01:32:40.591 "superblock": true, 01:32:40.591 "num_base_bdevs": 2, 01:32:40.591 "num_base_bdevs_discovered": 1, 01:32:40.591 "num_base_bdevs_operational": 1, 01:32:40.591 "base_bdevs_list": [ 01:32:40.591 { 01:32:40.591 "name": null, 01:32:40.591 "uuid": "00000000-0000-0000-0000-000000000000", 01:32:40.591 "is_configured": false, 01:32:40.591 "data_offset": 0, 01:32:40.591 "data_size": 7936 01:32:40.591 }, 01:32:40.591 { 01:32:40.591 "name": "BaseBdev2", 01:32:40.591 "uuid": "e743b2d5-6de3-599a-a2d0-8a1da86441df", 01:32:40.591 "is_configured": true, 01:32:40.591 "data_offset": 256, 01:32:40.591 "data_size": 7936 01:32:40.591 } 01:32:40.591 ] 01:32:40.591 }' 01:32:40.591 05:27:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:32:40.591 05:27:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 01:32:40.591 05:27:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:32:40.591 05:27:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 01:32:40.591 05:27:32 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 01:32:40.591 05:27:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:40.591 05:27:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 01:32:40.591 05:27:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:40.591 05:27:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 01:32:40.591 05:27:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:40.591 05:27:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 01:32:40.591 [2024-12-09 05:27:32.179661] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 01:32:40.591 [2024-12-09 05:27:32.179966] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:32:40.591 [2024-12-09 05:27:32.180037] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 01:32:40.591 [2024-12-09 05:27:32.180068] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:32:40.591 [2024-12-09 05:27:32.180839] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:32:40.591 [2024-12-09 05:27:32.180874] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 01:32:40.591 [2024-12-09 05:27:32.181004] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 01:32:40.591 [2024-12-09 05:27:32.181031] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 01:32:40.591 [2024-12-09 05:27:32.181046] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 01:32:40.591 [2024-12-09 05:27:32.181061] 
bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 01:32:40.591 BaseBdev1 01:32:40.591 05:27:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:40.591 05:27:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # sleep 1 01:32:41.969 05:27:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 01:32:41.969 05:27:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:32:41.969 05:27:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:32:41.969 05:27:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:32:41.969 05:27:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:32:41.969 05:27:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 01:32:41.969 05:27:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:32:41.969 05:27:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:32:41.969 05:27:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:32:41.969 05:27:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 01:32:41.969 05:27:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:32:41.969 05:27:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:32:41.969 05:27:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:41.969 05:27:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 01:32:41.969 05:27:33 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:41.969 05:27:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:32:41.969 "name": "raid_bdev1", 01:32:41.969 "uuid": "f4476f99-2a97-4c99-ae1a-2e5c526ceebd", 01:32:41.969 "strip_size_kb": 0, 01:32:41.969 "state": "online", 01:32:41.969 "raid_level": "raid1", 01:32:41.969 "superblock": true, 01:32:41.969 "num_base_bdevs": 2, 01:32:41.969 "num_base_bdevs_discovered": 1, 01:32:41.969 "num_base_bdevs_operational": 1, 01:32:41.969 "base_bdevs_list": [ 01:32:41.969 { 01:32:41.969 "name": null, 01:32:41.969 "uuid": "00000000-0000-0000-0000-000000000000", 01:32:41.969 "is_configured": false, 01:32:41.969 "data_offset": 0, 01:32:41.969 "data_size": 7936 01:32:41.969 }, 01:32:41.969 { 01:32:41.969 "name": "BaseBdev2", 01:32:41.969 "uuid": "e743b2d5-6de3-599a-a2d0-8a1da86441df", 01:32:41.969 "is_configured": true, 01:32:41.969 "data_offset": 256, 01:32:41.969 "data_size": 7936 01:32:41.969 } 01:32:41.969 ] 01:32:41.969 }' 01:32:41.969 05:27:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:32:41.969 05:27:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 01:32:42.228 05:27:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 01:32:42.228 05:27:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:32:42.228 05:27:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 01:32:42.228 05:27:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 01:32:42.228 05:27:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:32:42.228 05:27:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:32:42.228 05:27:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 
-- # rpc_cmd bdev_raid_get_bdevs all 01:32:42.228 05:27:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:42.228 05:27:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 01:32:42.228 05:27:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:42.228 05:27:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:32:42.228 "name": "raid_bdev1", 01:32:42.228 "uuid": "f4476f99-2a97-4c99-ae1a-2e5c526ceebd", 01:32:42.228 "strip_size_kb": 0, 01:32:42.228 "state": "online", 01:32:42.228 "raid_level": "raid1", 01:32:42.228 "superblock": true, 01:32:42.228 "num_base_bdevs": 2, 01:32:42.228 "num_base_bdevs_discovered": 1, 01:32:42.228 "num_base_bdevs_operational": 1, 01:32:42.228 "base_bdevs_list": [ 01:32:42.228 { 01:32:42.228 "name": null, 01:32:42.228 "uuid": "00000000-0000-0000-0000-000000000000", 01:32:42.228 "is_configured": false, 01:32:42.228 "data_offset": 0, 01:32:42.228 "data_size": 7936 01:32:42.228 }, 01:32:42.228 { 01:32:42.228 "name": "BaseBdev2", 01:32:42.228 "uuid": "e743b2d5-6de3-599a-a2d0-8a1da86441df", 01:32:42.228 "is_configured": true, 01:32:42.228 "data_offset": 256, 01:32:42.228 "data_size": 7936 01:32:42.228 } 01:32:42.228 ] 01:32:42.228 }' 01:32:42.228 05:27:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:32:42.228 05:27:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 01:32:42.228 05:27:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:32:42.487 05:27:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 01:32:42.487 05:27:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 01:32:42.487 05:27:33 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@652 -- # local es=0 01:32:42.487 05:27:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 01:32:42.487 05:27:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 01:32:42.487 05:27:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:32:42.487 05:27:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 01:32:42.487 05:27:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:32:42.487 05:27:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 01:32:42.487 05:27:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:42.487 05:27:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 01:32:42.487 [2024-12-09 05:27:33.884582] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 01:32:42.487 [2024-12-09 05:27:33.884894] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 01:32:42.487 [2024-12-09 05:27:33.884920] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 01:32:42.487 request: 01:32:42.487 { 01:32:42.487 "base_bdev": "BaseBdev1", 01:32:42.487 "raid_bdev": "raid_bdev1", 01:32:42.487 "method": "bdev_raid_add_base_bdev", 01:32:42.487 "req_id": 1 01:32:42.487 } 01:32:42.487 Got JSON-RPC error response 01:32:42.487 response: 01:32:42.487 { 01:32:42.487 "code": -22, 01:32:42.487 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 01:32:42.487 } 01:32:42.488 05:27:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 
01:32:42.488 05:27:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # es=1 01:32:42.488 05:27:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:32:42.488 05:27:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:32:42.488 05:27:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:32:42.488 05:27:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # sleep 1 01:32:43.424 05:27:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 01:32:43.424 05:27:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:32:43.424 05:27:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:32:43.424 05:27:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:32:43.424 05:27:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:32:43.424 05:27:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 01:32:43.424 05:27:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:32:43.424 05:27:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:32:43.424 05:27:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:32:43.424 05:27:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 01:32:43.424 05:27:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:32:43.424 05:27:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:43.424 05:27:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 01:32:43.424 05:27:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 01:32:43.424 05:27:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:43.424 05:27:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:32:43.424 "name": "raid_bdev1", 01:32:43.424 "uuid": "f4476f99-2a97-4c99-ae1a-2e5c526ceebd", 01:32:43.424 "strip_size_kb": 0, 01:32:43.424 "state": "online", 01:32:43.424 "raid_level": "raid1", 01:32:43.424 "superblock": true, 01:32:43.424 "num_base_bdevs": 2, 01:32:43.424 "num_base_bdevs_discovered": 1, 01:32:43.424 "num_base_bdevs_operational": 1, 01:32:43.424 "base_bdevs_list": [ 01:32:43.424 { 01:32:43.424 "name": null, 01:32:43.424 "uuid": "00000000-0000-0000-0000-000000000000", 01:32:43.424 "is_configured": false, 01:32:43.424 "data_offset": 0, 01:32:43.424 "data_size": 7936 01:32:43.424 }, 01:32:43.424 { 01:32:43.424 "name": "BaseBdev2", 01:32:43.424 "uuid": "e743b2d5-6de3-599a-a2d0-8a1da86441df", 01:32:43.424 "is_configured": true, 01:32:43.424 "data_offset": 256, 01:32:43.424 "data_size": 7936 01:32:43.424 } 01:32:43.424 ] 01:32:43.424 }' 01:32:43.424 05:27:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:32:43.424 05:27:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 01:32:43.990 05:27:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 01:32:43.990 05:27:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:32:43.990 05:27:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 01:32:43.990 05:27:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 01:32:43.990 05:27:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:32:43.990 05:27:35 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:32:43.990 05:27:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:43.990 05:27:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:32:43.990 05:27:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 01:32:43.990 05:27:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:43.990 05:27:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:32:43.990 "name": "raid_bdev1", 01:32:43.990 "uuid": "f4476f99-2a97-4c99-ae1a-2e5c526ceebd", 01:32:43.990 "strip_size_kb": 0, 01:32:43.990 "state": "online", 01:32:43.990 "raid_level": "raid1", 01:32:43.990 "superblock": true, 01:32:43.990 "num_base_bdevs": 2, 01:32:43.990 "num_base_bdevs_discovered": 1, 01:32:43.990 "num_base_bdevs_operational": 1, 01:32:43.990 "base_bdevs_list": [ 01:32:43.990 { 01:32:43.990 "name": null, 01:32:43.990 "uuid": "00000000-0000-0000-0000-000000000000", 01:32:43.990 "is_configured": false, 01:32:43.990 "data_offset": 0, 01:32:43.990 "data_size": 7936 01:32:43.990 }, 01:32:43.990 { 01:32:43.990 "name": "BaseBdev2", 01:32:43.990 "uuid": "e743b2d5-6de3-599a-a2d0-8a1da86441df", 01:32:43.990 "is_configured": true, 01:32:43.990 "data_offset": 256, 01:32:43.990 "data_size": 7936 01:32:43.990 } 01:32:43.990 ] 01:32:43.990 }' 01:32:43.990 05:27:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:32:43.991 05:27:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 01:32:43.991 05:27:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:32:44.249 05:27:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 01:32:44.249 05:27:35 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # killprocess 86871 01:32:44.249 05:27:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 86871 ']' 01:32:44.249 05:27:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 86871 01:32:44.249 05:27:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # uname 01:32:44.249 05:27:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:32:44.249 05:27:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86871 01:32:44.249 killing process with pid 86871 01:32:44.249 Received shutdown signal, test time was about 60.000000 seconds 01:32:44.249 01:32:44.249 Latency(us) 01:32:44.249 [2024-12-09T05:27:35.866Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:32:44.249 [2024-12-09T05:27:35.866Z] =================================================================================================================== 01:32:44.249 [2024-12-09T05:27:35.866Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 01:32:44.249 05:27:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:32:44.249 05:27:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:32:44.249 05:27:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86871' 01:32:44.249 05:27:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@973 -- # kill 86871 01:32:44.249 [2024-12-09 05:27:35.644906] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 01:32:44.249 05:27:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@978 -- # wait 86871 01:32:44.249 [2024-12-09 05:27:35.645126] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 01:32:44.249 [2024-12-09 
05:27:35.645248] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 01:32:44.249 [2024-12-09 05:27:35.645273] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 01:32:44.507 [2024-12-09 05:27:35.894345] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 01:32:45.441 05:27:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@786 -- # return 0 01:32:45.441 01:32:45.441 real 0m21.871s 01:32:45.441 user 0m29.602s 01:32:45.441 sys 0m2.548s 01:32:45.441 ************************************ 01:32:45.441 END TEST raid_rebuild_test_sb_4k 01:32:45.441 ************************************ 01:32:45.441 05:27:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 01:32:45.441 05:27:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 01:32:45.441 05:27:37 bdev_raid -- bdev/bdev_raid.sh@1003 -- # base_malloc_params='-m 32' 01:32:45.441 05:27:37 bdev_raid -- bdev/bdev_raid.sh@1004 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 01:32:45.441 05:27:37 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 01:32:45.441 05:27:37 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 01:32:45.441 05:27:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 01:32:45.699 ************************************ 01:32:45.699 START TEST raid_state_function_test_sb_md_separate 01:32:45.699 ************************************ 01:32:45.699 05:27:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 01:32:45.699 05:27:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 01:32:45.699 05:27:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 01:32:45.699 
05:27:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # local superblock=true 01:32:45.699 05:27:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # local raid_bdev 01:32:45.699 05:27:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 01:32:45.699 05:27:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 01:32:45.699 05:27:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 01:32:45.699 05:27:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 01:32:45.699 05:27:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 01:32:45.699 05:27:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 01:32:45.699 05:27:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 01:32:45.699 05:27:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 01:32:45.699 05:27:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 01:32:45.699 05:27:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # local base_bdevs 01:32:45.699 05:27:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 01:32:45.699 05:27:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # local strip_size 01:32:45.699 05:27:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 01:32:45.699 05:27:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 01:32:45.699 05:27:37 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 01:32:45.699 05:27:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@219 -- # strip_size=0 01:32:45.699 05:27:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 01:32:45.699 05:27:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 01:32:45.699 05:27:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@229 -- # raid_pid=87574 01:32:45.699 05:27:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 01:32:45.699 Process raid pid: 87574 01:32:45.699 05:27:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 87574' 01:32:45.699 05:27:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@231 -- # waitforlisten 87574 01:32:45.699 05:27:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 87574 ']' 01:32:45.699 05:27:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:32:45.699 05:27:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 01:32:45.699 05:27:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:32:45.699 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
01:32:45.699 05:27:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 01:32:45.699 05:27:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 01:32:45.699 [2024-12-09 05:27:37.183973] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:32:45.699 [2024-12-09 05:27:37.184381] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:32:45.956 [2024-12-09 05:27:37.371458] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:32:45.956 [2024-12-09 05:27:37.489006] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:32:46.213 [2024-12-09 05:27:37.696582] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 01:32:46.213 [2024-12-09 05:27:37.696634] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 01:32:46.778 05:27:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:32:46.778 05:27:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 01:32:46.778 05:27:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 01:32:46.778 05:27:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:46.778 05:27:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 01:32:46.778 [2024-12-09 05:27:38.168301] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 01:32:46.778 [2024-12-09 05:27:38.168550] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev1 doesn't exist now 01:32:46.778 [2024-12-09 05:27:38.168577] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 01:32:46.778 [2024-12-09 05:27:38.168596] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 01:32:46.778 05:27:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:46.778 05:27:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 01:32:46.778 05:27:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:32:46.778 05:27:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:32:46.778 05:27:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:32:46.778 05:27:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:32:46.778 05:27:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 01:32:46.778 05:27:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:32:46.778 05:27:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:32:46.778 05:27:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:32:46.778 05:27:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 01:32:46.778 05:27:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:32:46.778 05:27:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 01:32:46.778 05:27:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:46.778 05:27:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 01:32:46.778 05:27:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:46.779 05:27:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:32:46.779 "name": "Existed_Raid", 01:32:46.779 "uuid": "7a07b799-24e7-481e-8459-d3bf48ebd266", 01:32:46.779 "strip_size_kb": 0, 01:32:46.779 "state": "configuring", 01:32:46.779 "raid_level": "raid1", 01:32:46.779 "superblock": true, 01:32:46.779 "num_base_bdevs": 2, 01:32:46.779 "num_base_bdevs_discovered": 0, 01:32:46.779 "num_base_bdevs_operational": 2, 01:32:46.779 "base_bdevs_list": [ 01:32:46.779 { 01:32:46.779 "name": "BaseBdev1", 01:32:46.779 "uuid": "00000000-0000-0000-0000-000000000000", 01:32:46.779 "is_configured": false, 01:32:46.779 "data_offset": 0, 01:32:46.779 "data_size": 0 01:32:46.779 }, 01:32:46.779 { 01:32:46.779 "name": "BaseBdev2", 01:32:46.779 "uuid": "00000000-0000-0000-0000-000000000000", 01:32:46.779 "is_configured": false, 01:32:46.779 "data_offset": 0, 01:32:46.779 "data_size": 0 01:32:46.779 } 01:32:46.779 ] 01:32:46.779 }' 01:32:46.779 05:27:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:32:46.779 05:27:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 01:32:47.366 05:27:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 01:32:47.366 05:27:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:47.366 05:27:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 01:32:47.366 
[2024-12-09 05:27:38.688426] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 01:32:47.366 [2024-12-09 05:27:38.688465] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 01:32:47.366 05:27:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:47.366 05:27:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 01:32:47.366 05:27:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:47.366 05:27:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 01:32:47.366 [2024-12-09 05:27:38.700419] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 01:32:47.366 [2024-12-09 05:27:38.700610] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 01:32:47.366 [2024-12-09 05:27:38.700754] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 01:32:47.366 [2024-12-09 05:27:38.700885] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 01:32:47.366 05:27:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:47.366 05:27:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 01:32:47.366 05:27:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:47.366 05:27:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 01:32:47.366 [2024-12-09 05:27:38.743736] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 01:32:47.366 
BaseBdev1 01:32:47.366 05:27:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:47.366 05:27:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 01:32:47.366 05:27:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 01:32:47.366 05:27:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # local bdev_timeout= 01:32:47.366 05:27:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 01:32:47.366 05:27:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 01:32:47.366 05:27:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 01:32:47.366 05:27:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 01:32:47.366 05:27:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:47.366 05:27:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 01:32:47.366 05:27:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:47.366 05:27:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 01:32:47.366 05:27:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:47.366 05:27:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 01:32:47.366 [ 01:32:47.366 { 01:32:47.366 "name": "BaseBdev1", 01:32:47.366 "aliases": [ 01:32:47.366 "ff3c6338-cac5-48f0-bfc2-cdf64723f3c7" 01:32:47.366 ], 01:32:47.366 "product_name": "Malloc disk", 
01:32:47.366 "block_size": 4096, 01:32:47.366 "num_blocks": 8192, 01:32:47.366 "uuid": "ff3c6338-cac5-48f0-bfc2-cdf64723f3c7", 01:32:47.366 "md_size": 32, 01:32:47.366 "md_interleave": false, 01:32:47.366 "dif_type": 0, 01:32:47.366 "assigned_rate_limits": { 01:32:47.366 "rw_ios_per_sec": 0, 01:32:47.366 "rw_mbytes_per_sec": 0, 01:32:47.366 "r_mbytes_per_sec": 0, 01:32:47.366 "w_mbytes_per_sec": 0 01:32:47.366 }, 01:32:47.366 "claimed": true, 01:32:47.366 "claim_type": "exclusive_write", 01:32:47.366 "zoned": false, 01:32:47.366 "supported_io_types": { 01:32:47.366 "read": true, 01:32:47.366 "write": true, 01:32:47.366 "unmap": true, 01:32:47.366 "flush": true, 01:32:47.366 "reset": true, 01:32:47.366 "nvme_admin": false, 01:32:47.366 "nvme_io": false, 01:32:47.366 "nvme_io_md": false, 01:32:47.366 "write_zeroes": true, 01:32:47.366 "zcopy": true, 01:32:47.366 "get_zone_info": false, 01:32:47.366 "zone_management": false, 01:32:47.366 "zone_append": false, 01:32:47.366 "compare": false, 01:32:47.366 "compare_and_write": false, 01:32:47.366 "abort": true, 01:32:47.366 "seek_hole": false, 01:32:47.366 "seek_data": false, 01:32:47.366 "copy": true, 01:32:47.366 "nvme_iov_md": false 01:32:47.366 }, 01:32:47.366 "memory_domains": [ 01:32:47.366 { 01:32:47.366 "dma_device_id": "system", 01:32:47.366 "dma_device_type": 1 01:32:47.366 }, 01:32:47.366 { 01:32:47.367 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:32:47.367 "dma_device_type": 2 01:32:47.367 } 01:32:47.367 ], 01:32:47.367 "driver_specific": {} 01:32:47.367 } 01:32:47.367 ] 01:32:47.367 05:27:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:47.367 05:27:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 01:32:47.367 05:27:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 01:32:47.367 05:27:38 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:32:47.367 05:27:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:32:47.367 05:27:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:32:47.367 05:27:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:32:47.367 05:27:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 01:32:47.367 05:27:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:32:47.367 05:27:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:32:47.367 05:27:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:32:47.367 05:27:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 01:32:47.367 05:27:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:32:47.367 05:27:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:47.367 05:27:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 01:32:47.367 05:27:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:32:47.367 05:27:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:47.367 05:27:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:32:47.367 "name": "Existed_Raid", 01:32:47.367 "uuid": "98e0c70a-7fd6-4a0d-ab2a-842542de005b", 
01:32:47.367 "strip_size_kb": 0, 01:32:47.367 "state": "configuring", 01:32:47.367 "raid_level": "raid1", 01:32:47.367 "superblock": true, 01:32:47.367 "num_base_bdevs": 2, 01:32:47.367 "num_base_bdevs_discovered": 1, 01:32:47.367 "num_base_bdevs_operational": 2, 01:32:47.367 "base_bdevs_list": [ 01:32:47.367 { 01:32:47.367 "name": "BaseBdev1", 01:32:47.367 "uuid": "ff3c6338-cac5-48f0-bfc2-cdf64723f3c7", 01:32:47.367 "is_configured": true, 01:32:47.367 "data_offset": 256, 01:32:47.367 "data_size": 7936 01:32:47.367 }, 01:32:47.367 { 01:32:47.367 "name": "BaseBdev2", 01:32:47.367 "uuid": "00000000-0000-0000-0000-000000000000", 01:32:47.367 "is_configured": false, 01:32:47.367 "data_offset": 0, 01:32:47.367 "data_size": 0 01:32:47.367 } 01:32:47.367 ] 01:32:47.367 }' 01:32:47.367 05:27:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:32:47.367 05:27:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 01:32:47.932 05:27:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 01:32:47.932 05:27:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:47.932 05:27:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 01:32:47.932 [2024-12-09 05:27:39.280211] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 01:32:47.932 [2024-12-09 05:27:39.280267] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 01:32:47.932 05:27:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:47.932 05:27:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 01:32:47.932 05:27:39 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:47.932 05:27:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 01:32:47.932 [2024-12-09 05:27:39.288282] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 01:32:47.932 [2024-12-09 05:27:39.291245] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 01:32:47.932 [2024-12-09 05:27:39.291445] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 01:32:47.932 05:27:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:47.932 05:27:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 01:32:47.932 05:27:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 01:32:47.933 05:27:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 01:32:47.933 05:27:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:32:47.933 05:27:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:32:47.933 05:27:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:32:47.933 05:27:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:32:47.933 05:27:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 01:32:47.933 05:27:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:32:47.933 05:27:39 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:32:47.933 05:27:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:32:47.933 05:27:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 01:32:47.933 05:27:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:32:47.933 05:27:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:32:47.933 05:27:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:47.933 05:27:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 01:32:47.933 05:27:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:47.933 05:27:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:32:47.933 "name": "Existed_Raid", 01:32:47.933 "uuid": "67da8d31-49b9-4ced-92e3-ef5ccebadf5d", 01:32:47.933 "strip_size_kb": 0, 01:32:47.933 "state": "configuring", 01:32:47.933 "raid_level": "raid1", 01:32:47.933 "superblock": true, 01:32:47.933 "num_base_bdevs": 2, 01:32:47.933 "num_base_bdevs_discovered": 1, 01:32:47.933 "num_base_bdevs_operational": 2, 01:32:47.933 "base_bdevs_list": [ 01:32:47.933 { 01:32:47.933 "name": "BaseBdev1", 01:32:47.933 "uuid": "ff3c6338-cac5-48f0-bfc2-cdf64723f3c7", 01:32:47.933 "is_configured": true, 01:32:47.933 "data_offset": 256, 01:32:47.933 "data_size": 7936 01:32:47.933 }, 01:32:47.933 { 01:32:47.933 "name": "BaseBdev2", 01:32:47.933 "uuid": "00000000-0000-0000-0000-000000000000", 01:32:47.933 "is_configured": false, 01:32:47.933 "data_offset": 0, 01:32:47.933 "data_size": 0 01:32:47.933 } 01:32:47.933 ] 01:32:47.933 }' 01:32:47.933 05:27:39 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:32:47.933 05:27:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 01:32:48.191 05:27:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 01:32:48.191 05:27:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:48.191 05:27:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 01:32:48.450 [2024-12-09 05:27:39.857162] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 01:32:48.450 [2024-12-09 05:27:39.857483] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 01:32:48.450 [2024-12-09 05:27:39.857506] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 01:32:48.450 [2024-12-09 05:27:39.857606] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 01:32:48.450 BaseBdev2 01:32:48.450 [2024-12-09 05:27:39.857880] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 01:32:48.450 [2024-12-09 05:27:39.857902] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 01:32:48.450 [2024-12-09 05:27:39.858018] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:32:48.450 05:27:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:48.450 05:27:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 01:32:48.450 05:27:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 01:32:48.450 05:27:39 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 01:32:48.450 05:27:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 01:32:48.450 05:27:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 01:32:48.450 05:27:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 01:32:48.450 05:27:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 01:32:48.450 05:27:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:48.450 05:27:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 01:32:48.450 05:27:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:48.450 05:27:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 01:32:48.450 05:27:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:48.450 05:27:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 01:32:48.450 [ 01:32:48.450 { 01:32:48.450 "name": "BaseBdev2", 01:32:48.450 "aliases": [ 01:32:48.450 "34899f20-9fa4-4b2a-9c48-6378d9c5cd41" 01:32:48.450 ], 01:32:48.450 "product_name": "Malloc disk", 01:32:48.450 "block_size": 4096, 01:32:48.450 "num_blocks": 8192, 01:32:48.450 "uuid": "34899f20-9fa4-4b2a-9c48-6378d9c5cd41", 01:32:48.450 "md_size": 32, 01:32:48.450 "md_interleave": false, 01:32:48.450 "dif_type": 0, 01:32:48.450 "assigned_rate_limits": { 01:32:48.450 "rw_ios_per_sec": 0, 01:32:48.450 "rw_mbytes_per_sec": 0, 01:32:48.450 "r_mbytes_per_sec": 0, 01:32:48.450 "w_mbytes_per_sec": 0 01:32:48.450 }, 01:32:48.450 "claimed": true, 01:32:48.450 "claim_type": 
"exclusive_write", 01:32:48.450 "zoned": false, 01:32:48.450 "supported_io_types": { 01:32:48.450 "read": true, 01:32:48.450 "write": true, 01:32:48.450 "unmap": true, 01:32:48.450 "flush": true, 01:32:48.450 "reset": true, 01:32:48.450 "nvme_admin": false, 01:32:48.450 "nvme_io": false, 01:32:48.450 "nvme_io_md": false, 01:32:48.450 "write_zeroes": true, 01:32:48.450 "zcopy": true, 01:32:48.450 "get_zone_info": false, 01:32:48.450 "zone_management": false, 01:32:48.450 "zone_append": false, 01:32:48.450 "compare": false, 01:32:48.450 "compare_and_write": false, 01:32:48.450 "abort": true, 01:32:48.450 "seek_hole": false, 01:32:48.450 "seek_data": false, 01:32:48.450 "copy": true, 01:32:48.450 "nvme_iov_md": false 01:32:48.450 }, 01:32:48.450 "memory_domains": [ 01:32:48.450 { 01:32:48.450 "dma_device_id": "system", 01:32:48.450 "dma_device_type": 1 01:32:48.450 }, 01:32:48.450 { 01:32:48.450 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:32:48.450 "dma_device_type": 2 01:32:48.450 } 01:32:48.450 ], 01:32:48.450 "driver_specific": {} 01:32:48.450 } 01:32:48.450 ] 01:32:48.450 05:27:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:48.450 05:27:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 01:32:48.450 05:27:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i++ )) 01:32:48.450 05:27:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 01:32:48.450 05:27:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 01:32:48.450 05:27:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:32:48.450 05:27:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:32:48.450 
05:27:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:32:48.450 05:27:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:32:48.450 05:27:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 01:32:48.450 05:27:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:32:48.450 05:27:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:32:48.450 05:27:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:32:48.451 05:27:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 01:32:48.451 05:27:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:32:48.451 05:27:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:48.451 05:27:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 01:32:48.451 05:27:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:32:48.451 05:27:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:48.451 05:27:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:32:48.451 "name": "Existed_Raid", 01:32:48.451 "uuid": "67da8d31-49b9-4ced-92e3-ef5ccebadf5d", 01:32:48.451 "strip_size_kb": 0, 01:32:48.451 "state": "online", 01:32:48.451 "raid_level": "raid1", 01:32:48.451 "superblock": true, 01:32:48.451 "num_base_bdevs": 2, 01:32:48.451 "num_base_bdevs_discovered": 2, 01:32:48.451 "num_base_bdevs_operational": 2, 01:32:48.451 
"base_bdevs_list": [ 01:32:48.451 { 01:32:48.451 "name": "BaseBdev1", 01:32:48.451 "uuid": "ff3c6338-cac5-48f0-bfc2-cdf64723f3c7", 01:32:48.451 "is_configured": true, 01:32:48.451 "data_offset": 256, 01:32:48.451 "data_size": 7936 01:32:48.451 }, 01:32:48.451 { 01:32:48.451 "name": "BaseBdev2", 01:32:48.451 "uuid": "34899f20-9fa4-4b2a-9c48-6378d9c5cd41", 01:32:48.451 "is_configured": true, 01:32:48.451 "data_offset": 256, 01:32:48.451 "data_size": 7936 01:32:48.451 } 01:32:48.451 ] 01:32:48.451 }' 01:32:48.451 05:27:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:32:48.451 05:27:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 01:32:49.017 05:27:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 01:32:49.017 05:27:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 01:32:49.017 05:27:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 01:32:49.017 05:27:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 01:32:49.017 05:27:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local name 01:32:49.017 05:27:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 01:32:49.017 05:27:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 01:32:49.017 05:27:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:49.017 05:27:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 01:32:49.017 05:27:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # 
set +x 01:32:49.017 [2024-12-09 05:27:40.457820] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 01:32:49.017 05:27:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:49.017 05:27:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 01:32:49.017 "name": "Existed_Raid", 01:32:49.017 "aliases": [ 01:32:49.017 "67da8d31-49b9-4ced-92e3-ef5ccebadf5d" 01:32:49.017 ], 01:32:49.017 "product_name": "Raid Volume", 01:32:49.017 "block_size": 4096, 01:32:49.017 "num_blocks": 7936, 01:32:49.017 "uuid": "67da8d31-49b9-4ced-92e3-ef5ccebadf5d", 01:32:49.017 "md_size": 32, 01:32:49.017 "md_interleave": false, 01:32:49.017 "dif_type": 0, 01:32:49.017 "assigned_rate_limits": { 01:32:49.017 "rw_ios_per_sec": 0, 01:32:49.017 "rw_mbytes_per_sec": 0, 01:32:49.017 "r_mbytes_per_sec": 0, 01:32:49.017 "w_mbytes_per_sec": 0 01:32:49.017 }, 01:32:49.017 "claimed": false, 01:32:49.017 "zoned": false, 01:32:49.017 "supported_io_types": { 01:32:49.017 "read": true, 01:32:49.017 "write": true, 01:32:49.017 "unmap": false, 01:32:49.017 "flush": false, 01:32:49.017 "reset": true, 01:32:49.017 "nvme_admin": false, 01:32:49.017 "nvme_io": false, 01:32:49.017 "nvme_io_md": false, 01:32:49.017 "write_zeroes": true, 01:32:49.017 "zcopy": false, 01:32:49.017 "get_zone_info": false, 01:32:49.017 "zone_management": false, 01:32:49.017 "zone_append": false, 01:32:49.017 "compare": false, 01:32:49.017 "compare_and_write": false, 01:32:49.017 "abort": false, 01:32:49.017 "seek_hole": false, 01:32:49.017 "seek_data": false, 01:32:49.017 "copy": false, 01:32:49.017 "nvme_iov_md": false 01:32:49.017 }, 01:32:49.017 "memory_domains": [ 01:32:49.017 { 01:32:49.017 "dma_device_id": "system", 01:32:49.017 "dma_device_type": 1 01:32:49.017 }, 01:32:49.017 { 01:32:49.017 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:32:49.017 "dma_device_type": 2 01:32:49.017 }, 01:32:49.017 { 
01:32:49.017 "dma_device_id": "system", 01:32:49.017 "dma_device_type": 1 01:32:49.017 }, 01:32:49.017 { 01:32:49.017 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:32:49.017 "dma_device_type": 2 01:32:49.017 } 01:32:49.017 ], 01:32:49.017 "driver_specific": { 01:32:49.017 "raid": { 01:32:49.017 "uuid": "67da8d31-49b9-4ced-92e3-ef5ccebadf5d", 01:32:49.017 "strip_size_kb": 0, 01:32:49.017 "state": "online", 01:32:49.017 "raid_level": "raid1", 01:32:49.017 "superblock": true, 01:32:49.017 "num_base_bdevs": 2, 01:32:49.017 "num_base_bdevs_discovered": 2, 01:32:49.017 "num_base_bdevs_operational": 2, 01:32:49.017 "base_bdevs_list": [ 01:32:49.017 { 01:32:49.017 "name": "BaseBdev1", 01:32:49.017 "uuid": "ff3c6338-cac5-48f0-bfc2-cdf64723f3c7", 01:32:49.017 "is_configured": true, 01:32:49.017 "data_offset": 256, 01:32:49.017 "data_size": 7936 01:32:49.017 }, 01:32:49.017 { 01:32:49.017 "name": "BaseBdev2", 01:32:49.017 "uuid": "34899f20-9fa4-4b2a-9c48-6378d9c5cd41", 01:32:49.017 "is_configured": true, 01:32:49.017 "data_offset": 256, 01:32:49.017 "data_size": 7936 01:32:49.017 } 01:32:49.017 ] 01:32:49.017 } 01:32:49.017 } 01:32:49.017 }' 01:32:49.017 05:27:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 01:32:49.017 05:27:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 01:32:49.017 BaseBdev2' 01:32:49.017 05:27:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:32:49.017 05:27:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 01:32:49.017 05:27:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:32:49.017 05:27:40 bdev_raid.raid_state_function_test_sb_md_separate 
-- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 01:32:49.017 05:27:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:49.017 05:27:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:32:49.017 05:27:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 01:32:49.017 05:27:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:49.276 05:27:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 01:32:49.276 05:27:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 01:32:49.276 05:27:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:32:49.276 05:27:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:32:49.276 05:27:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 01:32:49.276 05:27:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:49.276 05:27:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 01:32:49.276 05:27:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:49.276 05:27:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 01:32:49.276 05:27:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ 
\0 ]] 01:32:49.276 05:27:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 01:32:49.276 05:27:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:49.276 05:27:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 01:32:49.276 [2024-12-09 05:27:40.709511] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 01:32:49.276 05:27:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:49.276 05:27:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # local expected_state 01:32:49.276 05:27:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 01:32:49.276 05:27:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 01:32:49.276 05:27:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 01:32:49.276 05:27:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # expected_state=online 01:32:49.276 05:27:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 01:32:49.276 05:27:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:32:49.276 05:27:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:32:49.276 05:27:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:32:49.276 05:27:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:32:49.276 05:27:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 01:32:49.276 05:27:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:32:49.276 05:27:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:32:49.276 05:27:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:32:49.276 05:27:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 01:32:49.276 05:27:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:32:49.276 05:27:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:32:49.276 05:27:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:49.276 05:27:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 01:32:49.276 05:27:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:49.276 05:27:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:32:49.276 "name": "Existed_Raid", 01:32:49.276 "uuid": "67da8d31-49b9-4ced-92e3-ef5ccebadf5d", 01:32:49.276 "strip_size_kb": 0, 01:32:49.276 "state": "online", 01:32:49.276 "raid_level": "raid1", 01:32:49.276 "superblock": true, 01:32:49.276 "num_base_bdevs": 2, 01:32:49.276 "num_base_bdevs_discovered": 1, 01:32:49.276 "num_base_bdevs_operational": 1, 01:32:49.276 "base_bdevs_list": [ 01:32:49.276 { 01:32:49.276 "name": null, 01:32:49.276 "uuid": "00000000-0000-0000-0000-000000000000", 01:32:49.276 "is_configured": false, 01:32:49.276 "data_offset": 0, 01:32:49.276 "data_size": 7936 01:32:49.276 }, 01:32:49.276 { 01:32:49.276 "name": "BaseBdev2", 01:32:49.276 "uuid": 
"34899f20-9fa4-4b2a-9c48-6378d9c5cd41", 01:32:49.276 "is_configured": true, 01:32:49.276 "data_offset": 256, 01:32:49.276 "data_size": 7936 01:32:49.276 } 01:32:49.276 ] 01:32:49.276 }' 01:32:49.276 05:27:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:32:49.276 05:27:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 01:32:49.841 05:27:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 01:32:49.842 05:27:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 01:32:49.842 05:27:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 01:32:49.842 05:27:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:49.842 05:27:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 01:32:49.842 05:27:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 01:32:49.842 05:27:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:49.842 05:27:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 01:32:49.842 05:27:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 01:32:49.842 05:27:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 01:32:49.842 05:27:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:49.842 05:27:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 01:32:49.842 [2024-12-09 05:27:41.376125] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 01:32:49.842 [2024-12-09 05:27:41.376469] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 01:32:50.101 [2024-12-09 05:27:41.464455] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 01:32:50.101 [2024-12-09 05:27:41.464735] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 01:32:50.101 [2024-12-09 05:27:41.464767] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 01:32:50.101 05:27:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:50.101 05:27:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i++ )) 01:32:50.101 05:27:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 01:32:50.101 05:27:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 01:32:50.101 05:27:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 01:32:50.101 05:27:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:50.101 05:27:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 01:32:50.101 05:27:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:50.101 05:27:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # raid_bdev= 01:32:50.101 05:27:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 01:32:50.101 05:27:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 01:32:50.101 05:27:41 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@326 -- # killprocess 87574 01:32:50.101 05:27:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 87574 ']' 01:32:50.101 05:27:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 87574 01:32:50.101 05:27:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 01:32:50.101 05:27:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:32:50.101 05:27:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87574 01:32:50.101 killing process with pid 87574 01:32:50.101 05:27:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:32:50.101 05:27:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:32:50.101 05:27:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87574' 01:32:50.101 05:27:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 87574 01:32:50.101 [2024-12-09 05:27:41.558202] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 01:32:50.101 05:27:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 87574 01:32:50.101 [2024-12-09 05:27:41.573663] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 01:32:51.476 05:27:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@328 -- # return 0 01:32:51.476 01:32:51.476 real 0m5.609s 01:32:51.476 user 0m8.453s 01:32:51.476 sys 0m0.793s 01:32:51.476 ************************************ 01:32:51.476 END TEST raid_state_function_test_sb_md_separate 01:32:51.476 
************************************ 01:32:51.476 05:27:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 01:32:51.476 05:27:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 01:32:51.476 05:27:42 bdev_raid -- bdev/bdev_raid.sh@1005 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 01:32:51.476 05:27:42 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 01:32:51.476 05:27:42 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 01:32:51.476 05:27:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 01:32:51.476 ************************************ 01:32:51.476 START TEST raid_superblock_test_md_separate 01:32:51.476 ************************************ 01:32:51.476 05:27:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 01:32:51.476 05:27:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 01:32:51.476 05:27:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 01:32:51.476 05:27:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 01:32:51.476 05:27:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 01:32:51.476 05:27:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 01:32:51.476 05:27:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 01:32:51.476 05:27:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 01:32:51.476 05:27:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 01:32:51.476 05:27:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local 
raid_bdev_name=raid_bdev1 01:32:51.476 05:27:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size 01:32:51.476 05:27:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 01:32:51.476 05:27:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 01:32:51.476 05:27:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@402 -- # local raid_bdev 01:32:51.476 05:27:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 01:32:51.476 05:27:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@408 -- # strip_size=0 01:32:51.476 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:32:51.476 05:27:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # raid_pid=87833 01:32:51.476 05:27:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@413 -- # waitforlisten 87833 01:32:51.476 05:27:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 01:32:51.476 05:27:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@835 -- # '[' -z 87833 ']' 01:32:51.476 05:27:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:32:51.476 05:27:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 01:32:51.476 05:27:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
01:32:51.476 05:27:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 01:32:51.476 05:27:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 01:32:51.476 [2024-12-09 05:27:42.849201] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:32:51.476 [2024-12-09 05:27:42.849749] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87833 ] 01:32:51.476 [2024-12-09 05:27:43.031762] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:32:51.735 [2024-12-09 05:27:43.144987] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:32:51.735 [2024-12-09 05:27:43.344412] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 01:32:51.735 [2024-12-09 05:27:43.344457] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 01:32:52.301 05:27:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:32:52.301 05:27:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@868 -- # return 0 01:32:52.301 05:27:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 01:32:52.301 05:27:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 01:32:52.301 05:27:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 01:32:52.301 05:27:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 01:32:52.301 05:27:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 01:32:52.301 05:27:43 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 01:32:52.301 05:27:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 01:32:52.301 05:27:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 01:32:52.301 05:27:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc1 01:32:52.301 05:27:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:52.301 05:27:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 01:32:52.301 malloc1 01:32:52.301 05:27:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:52.301 05:27:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 01:32:52.301 05:27:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:52.301 05:27:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 01:32:52.301 [2024-12-09 05:27:43.820847] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 01:32:52.301 [2024-12-09 05:27:43.821098] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:32:52.301 [2024-12-09 05:27:43.821143] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 01:32:52.301 [2024-12-09 05:27:43.821159] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:32:52.302 [2024-12-09 05:27:43.823695] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:32:52.302 [2024-12-09 05:27:43.823738] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt1 01:32:52.302 pt1 01:32:52.302 05:27:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:52.302 05:27:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 01:32:52.302 05:27:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 01:32:52.302 05:27:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 01:32:52.302 05:27:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 01:32:52.302 05:27:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 01:32:52.302 05:27:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 01:32:52.302 05:27:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 01:32:52.302 05:27:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 01:32:52.302 05:27:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc2 01:32:52.302 05:27:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:52.302 05:27:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 01:32:52.302 malloc2 01:32:52.302 05:27:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:52.302 05:27:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 01:32:52.302 05:27:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:52.302 05:27:43 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 01:32:52.302 [2024-12-09 05:27:43.872057] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 01:32:52.302 [2024-12-09 05:27:43.872118] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:32:52.302 [2024-12-09 05:27:43.872162] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 01:32:52.302 [2024-12-09 05:27:43.872176] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:32:52.302 [2024-12-09 05:27:43.875259] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:32:52.302 [2024-12-09 05:27:43.875302] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 01:32:52.302 pt2 01:32:52.302 05:27:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:52.302 05:27:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 01:32:52.302 05:27:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 01:32:52.302 05:27:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 01:32:52.302 05:27:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:52.302 05:27:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 01:32:52.302 [2024-12-09 05:27:43.884165] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 01:32:52.302 [2024-12-09 05:27:43.886663] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 01:32:52.302 [2024-12-09 05:27:43.886883] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 01:32:52.302 [2024-12-09 05:27:43.886903] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 01:32:52.302 [2024-12-09 05:27:43.887005] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 01:32:52.302 [2024-12-09 05:27:43.887150] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 01:32:52.302 [2024-12-09 05:27:43.887168] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 01:32:52.302 [2024-12-09 05:27:43.887282] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:32:52.302 05:27:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:52.302 05:27:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 01:32:52.302 05:27:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:32:52.302 05:27:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:32:52.302 05:27:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:32:52.302 05:27:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:32:52.302 05:27:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 01:32:52.302 05:27:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:32:52.302 05:27:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:32:52.302 05:27:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:32:52.302 05:27:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 01:32:52.302 05:27:43 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:32:52.302 05:27:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:32:52.302 05:27:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:52.302 05:27:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 01:32:52.302 05:27:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:52.576 05:27:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:32:52.576 "name": "raid_bdev1", 01:32:52.576 "uuid": "152c1d5b-eb9d-4666-874f-25ee82fda861", 01:32:52.576 "strip_size_kb": 0, 01:32:52.576 "state": "online", 01:32:52.576 "raid_level": "raid1", 01:32:52.576 "superblock": true, 01:32:52.576 "num_base_bdevs": 2, 01:32:52.576 "num_base_bdevs_discovered": 2, 01:32:52.576 "num_base_bdevs_operational": 2, 01:32:52.576 "base_bdevs_list": [ 01:32:52.576 { 01:32:52.576 "name": "pt1", 01:32:52.576 "uuid": "00000000-0000-0000-0000-000000000001", 01:32:52.576 "is_configured": true, 01:32:52.576 "data_offset": 256, 01:32:52.576 "data_size": 7936 01:32:52.576 }, 01:32:52.576 { 01:32:52.576 "name": "pt2", 01:32:52.576 "uuid": "00000000-0000-0000-0000-000000000002", 01:32:52.576 "is_configured": true, 01:32:52.576 "data_offset": 256, 01:32:52.576 "data_size": 7936 01:32:52.576 } 01:32:52.576 ] 01:32:52.576 }' 01:32:52.576 05:27:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:32:52.576 05:27:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 01:32:52.835 05:27:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 01:32:52.835 05:27:44 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 01:32:52.835 05:27:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 01:32:52.835 05:27:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 01:32:52.835 05:27:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 01:32:52.835 05:27:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 01:32:52.835 05:27:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 01:32:52.835 05:27:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 01:32:52.835 05:27:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:52.835 05:27:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 01:32:52.835 [2024-12-09 05:27:44.384750] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 01:32:52.835 05:27:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:52.835 05:27:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 01:32:52.835 "name": "raid_bdev1", 01:32:52.835 "aliases": [ 01:32:52.835 "152c1d5b-eb9d-4666-874f-25ee82fda861" 01:32:52.835 ], 01:32:52.835 "product_name": "Raid Volume", 01:32:52.835 "block_size": 4096, 01:32:52.835 "num_blocks": 7936, 01:32:52.835 "uuid": "152c1d5b-eb9d-4666-874f-25ee82fda861", 01:32:52.835 "md_size": 32, 01:32:52.835 "md_interleave": false, 01:32:52.835 "dif_type": 0, 01:32:52.835 "assigned_rate_limits": { 01:32:52.835 "rw_ios_per_sec": 0, 01:32:52.835 "rw_mbytes_per_sec": 0, 01:32:52.835 "r_mbytes_per_sec": 0, 01:32:52.836 "w_mbytes_per_sec": 0 01:32:52.836 }, 01:32:52.836 "claimed": false, 01:32:52.836 "zoned": false, 
01:32:52.836 "supported_io_types": { 01:32:52.836 "read": true, 01:32:52.836 "write": true, 01:32:52.836 "unmap": false, 01:32:52.836 "flush": false, 01:32:52.836 "reset": true, 01:32:52.836 "nvme_admin": false, 01:32:52.836 "nvme_io": false, 01:32:52.836 "nvme_io_md": false, 01:32:52.836 "write_zeroes": true, 01:32:52.836 "zcopy": false, 01:32:52.836 "get_zone_info": false, 01:32:52.836 "zone_management": false, 01:32:52.836 "zone_append": false, 01:32:52.836 "compare": false, 01:32:52.836 "compare_and_write": false, 01:32:52.836 "abort": false, 01:32:52.836 "seek_hole": false, 01:32:52.836 "seek_data": false, 01:32:52.836 "copy": false, 01:32:52.836 "nvme_iov_md": false 01:32:52.836 }, 01:32:52.836 "memory_domains": [ 01:32:52.836 { 01:32:52.836 "dma_device_id": "system", 01:32:52.836 "dma_device_type": 1 01:32:52.836 }, 01:32:52.836 { 01:32:52.836 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:32:52.836 "dma_device_type": 2 01:32:52.836 }, 01:32:52.836 { 01:32:52.836 "dma_device_id": "system", 01:32:52.836 "dma_device_type": 1 01:32:52.836 }, 01:32:52.836 { 01:32:52.836 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:32:52.836 "dma_device_type": 2 01:32:52.836 } 01:32:52.836 ], 01:32:52.836 "driver_specific": { 01:32:52.836 "raid": { 01:32:52.836 "uuid": "152c1d5b-eb9d-4666-874f-25ee82fda861", 01:32:52.836 "strip_size_kb": 0, 01:32:52.836 "state": "online", 01:32:52.836 "raid_level": "raid1", 01:32:52.836 "superblock": true, 01:32:52.836 "num_base_bdevs": 2, 01:32:52.836 "num_base_bdevs_discovered": 2, 01:32:52.836 "num_base_bdevs_operational": 2, 01:32:52.836 "base_bdevs_list": [ 01:32:52.836 { 01:32:52.836 "name": "pt1", 01:32:52.836 "uuid": "00000000-0000-0000-0000-000000000001", 01:32:52.836 "is_configured": true, 01:32:52.836 "data_offset": 256, 01:32:52.836 "data_size": 7936 01:32:52.836 }, 01:32:52.836 { 01:32:52.836 "name": "pt2", 01:32:52.836 "uuid": "00000000-0000-0000-0000-000000000002", 01:32:52.836 "is_configured": true, 01:32:52.836 "data_offset": 256, 
01:32:52.836 "data_size": 7936 01:32:52.836 } 01:32:52.836 ] 01:32:52.836 } 01:32:52.836 } 01:32:52.836 }' 01:32:52.836 05:27:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 01:32:53.095 05:27:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 01:32:53.095 pt2' 01:32:53.095 05:27:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:32:53.095 05:27:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 01:32:53.095 05:27:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:32:53.095 05:27:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 01:32:53.095 05:27:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:32:53.095 05:27:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:53.095 05:27:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 01:32:53.095 05:27:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:53.095 05:27:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 01:32:53.095 05:27:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 01:32:53.095 05:27:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:32:53.095 05:27:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs 
-b pt2 01:32:53.095 05:27:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:53.095 05:27:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 01:32:53.095 05:27:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:32:53.095 05:27:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:53.095 05:27:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 01:32:53.095 05:27:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 01:32:53.095 05:27:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 01:32:53.095 05:27:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 01:32:53.095 05:27:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:53.095 05:27:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 01:32:53.095 [2024-12-09 05:27:44.644666] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 01:32:53.095 05:27:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:53.095 05:27:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=152c1d5b-eb9d-4666-874f-25ee82fda861 01:32:53.095 05:27:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@436 -- # '[' -z 152c1d5b-eb9d-4666-874f-25ee82fda861 ']' 01:32:53.095 05:27:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 01:32:53.095 05:27:44 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@563 -- # xtrace_disable 01:32:53.095 05:27:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 01:32:53.095 [2024-12-09 05:27:44.692314] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 01:32:53.095 [2024-12-09 05:27:44.692520] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 01:32:53.095 [2024-12-09 05:27:44.692769] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 01:32:53.095 [2024-12-09 05:27:44.692948] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 01:32:53.095 [2024-12-09 05:27:44.693092] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 01:32:53.095 05:27:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:53.095 05:27:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 01:32:53.095 05:27:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:53.095 05:27:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 01:32:53.095 05:27:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 01:32:53.095 05:27:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:53.355 05:27:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # raid_bdev= 01:32:53.355 05:27:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 01:32:53.355 05:27:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 01:32:53.355 05:27:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 
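The `verify_raid_bdev_state` helper seen above filters `bdev_raid_get_bdevs all` through jq and string-compares individual fields. A self-contained Python sketch of the same checks, using the `raid_bdev_info` JSON captured earlier in this log (abridged to the fields the helper actually tests):

```python
import json

# raid_bdev_info for raid_bdev1 as dumped by bdev_raid_get_bdevs earlier
# in this log, trimmed to the fields verify_raid_bdev_state compares.
raid_bdev_info = json.loads("""
{
  "name": "raid_bdev1",
  "uuid": "152c1d5b-eb9d-4666-874f-25ee82fda861",
  "strip_size_kb": 0,
  "state": "online",
  "raid_level": "raid1",
  "superblock": true,
  "num_base_bdevs": 2,
  "num_base_bdevs_discovered": 2,
  "num_base_bdevs_operational": 2
}
""")

# The shell helper's [[ ... ]] comparisons, expressed as assertions.
assert raid_bdev_info["state"] == "online"
assert raid_bdev_info["raid_level"] == "raid1"
assert raid_bdev_info["strip_size_kb"] == 0          # raid1: no striping
assert raid_bdev_info["num_base_bdevs_discovered"] == 2
assert raid_bdev_info["num_base_bdevs_operational"] == 2
```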
01:32:53.355 05:27:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:53.355 05:27:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 01:32:53.355 05:27:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:53.355 05:27:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 01:32:53.355 05:27:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 01:32:53.355 05:27:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:53.355 05:27:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 01:32:53.355 05:27:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:53.355 05:27:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 01:32:53.355 05:27:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:53.355 05:27:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 01:32:53.355 05:27:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 01:32:53.355 05:27:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:53.355 05:27:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 01:32:53.355 05:27:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 01:32:53.355 05:27:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@652 -- # local es=0 01:32:53.355 05:27:44 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 01:32:53.355 05:27:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 01:32:53.355 05:27:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:32:53.355 05:27:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 01:32:53.355 05:27:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:32:53.355 05:27:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 01:32:53.355 05:27:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:53.355 05:27:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 01:32:53.355 [2024-12-09 05:27:44.836434] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 01:32:53.355 [2024-12-09 05:27:44.839064] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 01:32:53.355 [2024-12-09 05:27:44.839175] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 01:32:53.355 [2024-12-09 05:27:44.839246] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 01:32:53.355 [2024-12-09 05:27:44.839270] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 01:32:53.355 [2024-12-09 05:27:44.839285] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 01:32:53.355 request: 01:32:53.355 { 01:32:53.355 "name": 
"raid_bdev1", 01:32:53.355 "raid_level": "raid1", 01:32:53.355 "base_bdevs": [ 01:32:53.355 "malloc1", 01:32:53.355 "malloc2" 01:32:53.355 ], 01:32:53.355 "superblock": false, 01:32:53.355 "method": "bdev_raid_create", 01:32:53.355 "req_id": 1 01:32:53.355 } 01:32:53.355 Got JSON-RPC error response 01:32:53.355 response: 01:32:53.355 { 01:32:53.355 "code": -17, 01:32:53.355 "message": "Failed to create RAID bdev raid_bdev1: File exists" 01:32:53.355 } 01:32:53.355 05:27:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 01:32:53.355 05:27:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # es=1 01:32:53.355 05:27:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:32:53.355 05:27:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:32:53.355 05:27:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:32:53.355 05:27:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 01:32:53.355 05:27:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:53.355 05:27:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 01:32:53.355 05:27:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 01:32:53.355 05:27:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:53.355 05:27:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # raid_bdev= 01:32:53.355 05:27:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 01:32:53.355 05:27:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 01:32:53.355 05:27:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:53.355 05:27:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 01:32:53.355 [2024-12-09 05:27:44.904380] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 01:32:53.355 [2024-12-09 05:27:44.904448] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:32:53.355 [2024-12-09 05:27:44.904479] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 01:32:53.355 [2024-12-09 05:27:44.904494] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:32:53.355 [2024-12-09 05:27:44.906995] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:32:53.355 [2024-12-09 05:27:44.907039] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 01:32:53.355 [2024-12-09 05:27:44.907091] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 01:32:53.355 [2024-12-09 05:27:44.907169] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 01:32:53.355 pt1 01:32:53.355 05:27:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:53.355 05:27:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 01:32:53.355 05:27:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:32:53.355 05:27:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:32:53.355 05:27:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:32:53.355 05:27:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 01:32:53.355 05:27:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 01:32:53.355 05:27:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:32:53.355 05:27:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:32:53.355 05:27:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:32:53.355 05:27:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 01:32:53.355 05:27:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:32:53.355 05:27:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:53.355 05:27:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 01:32:53.355 05:27:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:32:53.355 05:27:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:53.355 05:27:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:32:53.355 "name": "raid_bdev1", 01:32:53.355 "uuid": "152c1d5b-eb9d-4666-874f-25ee82fda861", 01:32:53.355 "strip_size_kb": 0, 01:32:53.355 "state": "configuring", 01:32:53.355 "raid_level": "raid1", 01:32:53.356 "superblock": true, 01:32:53.356 "num_base_bdevs": 2, 01:32:53.356 "num_base_bdevs_discovered": 1, 01:32:53.356 "num_base_bdevs_operational": 2, 01:32:53.356 "base_bdevs_list": [ 01:32:53.356 { 01:32:53.356 "name": "pt1", 01:32:53.356 "uuid": "00000000-0000-0000-0000-000000000001", 01:32:53.356 "is_configured": true, 01:32:53.356 "data_offset": 256, 01:32:53.356 "data_size": 7936 01:32:53.356 }, 01:32:53.356 { 01:32:53.356 "name": null, 01:32:53.356 
"uuid": "00000000-0000-0000-0000-000000000002", 01:32:53.356 "is_configured": false, 01:32:53.356 "data_offset": 256, 01:32:53.356 "data_size": 7936 01:32:53.356 } 01:32:53.356 ] 01:32:53.356 }' 01:32:53.356 05:27:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:32:53.356 05:27:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 01:32:53.930 05:27:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 01:32:53.930 05:27:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 01:32:53.930 05:27:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 01:32:53.930 05:27:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 01:32:53.930 05:27:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:53.930 05:27:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 01:32:53.930 [2024-12-09 05:27:45.424524] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 01:32:53.930 [2024-12-09 05:27:45.424620] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:32:53.930 [2024-12-09 05:27:45.424646] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 01:32:53.930 [2024-12-09 05:27:45.424662] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:32:53.930 [2024-12-09 05:27:45.424893] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:32:53.930 [2024-12-09 05:27:45.424920] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 01:32:53.930 [2024-12-09 05:27:45.424982] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found 
on bdev pt2 01:32:53.930 [2024-12-09 05:27:45.425009] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 01:32:53.930 [2024-12-09 05:27:45.425125] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 01:32:53.930 [2024-12-09 05:27:45.425143] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 01:32:53.930 [2024-12-09 05:27:45.425223] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 01:32:53.930 [2024-12-09 05:27:45.425354] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 01:32:53.930 [2024-12-09 05:27:45.425386] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 01:32:53.930 [2024-12-09 05:27:45.425523] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:32:53.930 pt2 01:32:53.930 05:27:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:53.930 05:27:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i++ )) 01:32:53.930 05:27:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 01:32:53.930 05:27:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 01:32:53.930 05:27:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:32:53.930 05:27:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:32:53.930 05:27:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:32:53.930 05:27:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:32:53.930 05:27:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 01:32:53.930 05:27:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:32:53.930 05:27:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:32:53.930 05:27:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:32:53.930 05:27:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 01:32:53.930 05:27:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:32:53.930 05:27:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:32:53.930 05:27:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:53.930 05:27:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 01:32:53.930 05:27:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:53.930 05:27:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:32:53.930 "name": "raid_bdev1", 01:32:53.930 "uuid": "152c1d5b-eb9d-4666-874f-25ee82fda861", 01:32:53.930 "strip_size_kb": 0, 01:32:53.930 "state": "online", 01:32:53.930 "raid_level": "raid1", 01:32:53.930 "superblock": true, 01:32:53.930 "num_base_bdevs": 2, 01:32:53.930 "num_base_bdevs_discovered": 2, 01:32:53.930 "num_base_bdevs_operational": 2, 01:32:53.930 "base_bdevs_list": [ 01:32:53.930 { 01:32:53.930 "name": "pt1", 01:32:53.930 "uuid": "00000000-0000-0000-0000-000000000001", 01:32:53.930 "is_configured": true, 01:32:53.930 "data_offset": 256, 01:32:53.930 "data_size": 7936 01:32:53.930 }, 01:32:53.930 { 01:32:53.930 "name": "pt2", 01:32:53.930 "uuid": "00000000-0000-0000-0000-000000000002", 01:32:53.930 "is_configured": true, 01:32:53.930 "data_offset": 256, 
01:32:53.930 "data_size": 7936 01:32:53.930 } 01:32:53.930 ] 01:32:53.930 }' 01:32:53.930 05:27:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:32:53.930 05:27:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 01:32:54.496 05:27:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 01:32:54.496 05:27:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 01:32:54.496 05:27:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 01:32:54.496 05:27:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 01:32:54.496 05:27:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 01:32:54.496 05:27:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 01:32:54.496 05:27:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 01:32:54.496 05:27:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:54.496 05:27:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 01:32:54.496 05:27:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 01:32:54.496 [2024-12-09 05:27:45.969101] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 01:32:54.496 05:27:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:54.496 05:27:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 01:32:54.496 "name": "raid_bdev1", 01:32:54.496 "aliases": [ 01:32:54.496 "152c1d5b-eb9d-4666-874f-25ee82fda861" 01:32:54.496 ], 01:32:54.496 "product_name": 
"Raid Volume", 01:32:54.496 "block_size": 4096, 01:32:54.496 "num_blocks": 7936, 01:32:54.496 "uuid": "152c1d5b-eb9d-4666-874f-25ee82fda861", 01:32:54.496 "md_size": 32, 01:32:54.496 "md_interleave": false, 01:32:54.496 "dif_type": 0, 01:32:54.496 "assigned_rate_limits": { 01:32:54.496 "rw_ios_per_sec": 0, 01:32:54.496 "rw_mbytes_per_sec": 0, 01:32:54.496 "r_mbytes_per_sec": 0, 01:32:54.496 "w_mbytes_per_sec": 0 01:32:54.496 }, 01:32:54.496 "claimed": false, 01:32:54.496 "zoned": false, 01:32:54.496 "supported_io_types": { 01:32:54.496 "read": true, 01:32:54.496 "write": true, 01:32:54.496 "unmap": false, 01:32:54.496 "flush": false, 01:32:54.496 "reset": true, 01:32:54.496 "nvme_admin": false, 01:32:54.496 "nvme_io": false, 01:32:54.496 "nvme_io_md": false, 01:32:54.496 "write_zeroes": true, 01:32:54.496 "zcopy": false, 01:32:54.496 "get_zone_info": false, 01:32:54.496 "zone_management": false, 01:32:54.496 "zone_append": false, 01:32:54.496 "compare": false, 01:32:54.496 "compare_and_write": false, 01:32:54.496 "abort": false, 01:32:54.496 "seek_hole": false, 01:32:54.496 "seek_data": false, 01:32:54.496 "copy": false, 01:32:54.496 "nvme_iov_md": false 01:32:54.496 }, 01:32:54.496 "memory_domains": [ 01:32:54.496 { 01:32:54.496 "dma_device_id": "system", 01:32:54.496 "dma_device_type": 1 01:32:54.496 }, 01:32:54.496 { 01:32:54.496 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:32:54.496 "dma_device_type": 2 01:32:54.496 }, 01:32:54.496 { 01:32:54.496 "dma_device_id": "system", 01:32:54.496 "dma_device_type": 1 01:32:54.496 }, 01:32:54.496 { 01:32:54.496 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:32:54.496 "dma_device_type": 2 01:32:54.496 } 01:32:54.496 ], 01:32:54.496 "driver_specific": { 01:32:54.496 "raid": { 01:32:54.496 "uuid": "152c1d5b-eb9d-4666-874f-25ee82fda861", 01:32:54.496 "strip_size_kb": 0, 01:32:54.496 "state": "online", 01:32:54.496 "raid_level": "raid1", 01:32:54.496 "superblock": true, 01:32:54.496 "num_base_bdevs": 2, 01:32:54.496 
"num_base_bdevs_discovered": 2, 01:32:54.496 "num_base_bdevs_operational": 2, 01:32:54.496 "base_bdevs_list": [ 01:32:54.496 { 01:32:54.496 "name": "pt1", 01:32:54.496 "uuid": "00000000-0000-0000-0000-000000000001", 01:32:54.496 "is_configured": true, 01:32:54.496 "data_offset": 256, 01:32:54.496 "data_size": 7936 01:32:54.496 }, 01:32:54.496 { 01:32:54.496 "name": "pt2", 01:32:54.496 "uuid": "00000000-0000-0000-0000-000000000002", 01:32:54.496 "is_configured": true, 01:32:54.496 "data_offset": 256, 01:32:54.496 "data_size": 7936 01:32:54.496 } 01:32:54.496 ] 01:32:54.496 } 01:32:54.496 } 01:32:54.496 }' 01:32:54.496 05:27:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 01:32:54.496 05:27:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 01:32:54.496 pt2' 01:32:54.496 05:27:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:32:54.754 05:27:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 01:32:54.754 05:27:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:32:54.754 05:27:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 01:32:54.754 05:27:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:54.754 05:27:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 01:32:54.754 05:27:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:32:54.754 05:27:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:54.754 
05:27:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 01:32:54.754 05:27:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 01:32:54.754 05:27:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:32:54.754 05:27:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 01:32:54.754 05:27:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:54.754 05:27:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 01:32:54.754 05:27:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:32:54.754 05:27:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:54.754 05:27:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 01:32:54.754 05:27:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 01:32:54.754 05:27:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 01:32:54.754 05:27:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 01:32:54.754 05:27:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:54.754 05:27:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 01:32:54.754 [2024-12-09 05:27:46.253163] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 01:32:54.754 05:27:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 01:32:54.754 05:27:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # '[' 152c1d5b-eb9d-4666-874f-25ee82fda861 '!=' 152c1d5b-eb9d-4666-874f-25ee82fda861 ']' 01:32:54.754 05:27:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 01:32:54.754 05:27:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 01:32:54.754 05:27:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 01:32:54.754 05:27:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 01:32:54.755 05:27:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:54.755 05:27:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 01:32:54.755 [2024-12-09 05:27:46.292936] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 01:32:54.755 05:27:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:54.755 05:27:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 01:32:54.755 05:27:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:32:54.755 05:27:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:32:54.755 05:27:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:32:54.755 05:27:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:32:54.755 05:27:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 01:32:54.755 05:27:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:32:54.755 05:27:46 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:32:54.755 05:27:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:32:54.755 05:27:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 01:32:54.755 05:27:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:32:54.755 05:27:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:54.755 05:27:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 01:32:54.755 05:27:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:32:54.755 05:27:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:54.755 05:27:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:32:54.755 "name": "raid_bdev1", 01:32:54.755 "uuid": "152c1d5b-eb9d-4666-874f-25ee82fda861", 01:32:54.755 "strip_size_kb": 0, 01:32:54.755 "state": "online", 01:32:54.755 "raid_level": "raid1", 01:32:54.755 "superblock": true, 01:32:54.755 "num_base_bdevs": 2, 01:32:54.755 "num_base_bdevs_discovered": 1, 01:32:54.755 "num_base_bdevs_operational": 1, 01:32:54.755 "base_bdevs_list": [ 01:32:54.755 { 01:32:54.755 "name": null, 01:32:54.755 "uuid": "00000000-0000-0000-0000-000000000000", 01:32:54.755 "is_configured": false, 01:32:54.755 "data_offset": 0, 01:32:54.755 "data_size": 7936 01:32:54.755 }, 01:32:54.755 { 01:32:54.755 "name": "pt2", 01:32:54.755 "uuid": "00000000-0000-0000-0000-000000000002", 01:32:54.755 "is_configured": true, 01:32:54.755 "data_offset": 256, 01:32:54.755 "data_size": 7936 01:32:54.755 } 01:32:54.755 ] 01:32:54.755 }' 01:32:54.755 05:27:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 01:32:54.755 05:27:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 01:32:55.366 05:27:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 01:32:55.366 05:27:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:55.366 05:27:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 01:32:55.366 [2024-12-09 05:27:46.797042] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 01:32:55.366 [2024-12-09 05:27:46.797088] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 01:32:55.366 [2024-12-09 05:27:46.797210] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 01:32:55.366 [2024-12-09 05:27:46.797274] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 01:32:55.366 [2024-12-09 05:27:46.797292] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 01:32:55.366 05:27:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:55.366 05:27:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 01:32:55.366 05:27:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:55.366 05:27:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 01:32:55.366 05:27:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 01:32:55.366 05:27:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:55.366 05:27:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # raid_bdev= 01:32:55.366 05:27:46 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 01:32:55.366 05:27:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 01:32:55.366 05:27:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 01:32:55.366 05:27:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 01:32:55.366 05:27:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:55.366 05:27:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 01:32:55.367 05:27:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:55.367 05:27:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i++ )) 01:32:55.367 05:27:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 01:32:55.367 05:27:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 01:32:55.367 05:27:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 01:32:55.367 05:27:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # i=1 01:32:55.367 05:27:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 01:32:55.367 05:27:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:55.367 05:27:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 01:32:55.367 [2024-12-09 05:27:46.865060] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 01:32:55.367 [2024-12-09 05:27:46.865292] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:32:55.367 
[2024-12-09 05:27:46.865326] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 01:32:55.367 [2024-12-09 05:27:46.865344] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:32:55.367 [2024-12-09 05:27:46.868474] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:32:55.367 [2024-12-09 05:27:46.868645] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 01:32:55.367 [2024-12-09 05:27:46.868732] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 01:32:55.367 [2024-12-09 05:27:46.868815] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 01:32:55.367 [2024-12-09 05:27:46.868958] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 01:32:55.367 [2024-12-09 05:27:46.868979] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 01:32:55.367 [2024-12-09 05:27:46.869080] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 01:32:55.367 [2024-12-09 05:27:46.869241] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 01:32:55.367 [2024-12-09 05:27:46.869255] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 01:32:55.367 [2024-12-09 05:27:46.869445] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:32:55.367 pt2 01:32:55.367 05:27:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:55.367 05:27:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 01:32:55.367 05:27:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:32:55.367 05:27:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- 
# local expected_state=online 01:32:55.367 05:27:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:32:55.367 05:27:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:32:55.367 05:27:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 01:32:55.367 05:27:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:32:55.367 05:27:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:32:55.367 05:27:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:32:55.367 05:27:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 01:32:55.367 05:27:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:32:55.367 05:27:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:32:55.367 05:27:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:55.367 05:27:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 01:32:55.367 05:27:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:55.367 05:27:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:32:55.367 "name": "raid_bdev1", 01:32:55.367 "uuid": "152c1d5b-eb9d-4666-874f-25ee82fda861", 01:32:55.367 "strip_size_kb": 0, 01:32:55.367 "state": "online", 01:32:55.367 "raid_level": "raid1", 01:32:55.367 "superblock": true, 01:32:55.367 "num_base_bdevs": 2, 01:32:55.367 "num_base_bdevs_discovered": 1, 01:32:55.367 "num_base_bdevs_operational": 1, 01:32:55.367 "base_bdevs_list": [ 01:32:55.367 { 01:32:55.367 
"name": null, 01:32:55.367 "uuid": "00000000-0000-0000-0000-000000000000", 01:32:55.367 "is_configured": false, 01:32:55.367 "data_offset": 256, 01:32:55.367 "data_size": 7936 01:32:55.367 }, 01:32:55.367 { 01:32:55.367 "name": "pt2", 01:32:55.367 "uuid": "00000000-0000-0000-0000-000000000002", 01:32:55.367 "is_configured": true, 01:32:55.367 "data_offset": 256, 01:32:55.367 "data_size": 7936 01:32:55.367 } 01:32:55.367 ] 01:32:55.367 }' 01:32:55.367 05:27:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:32:55.367 05:27:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 01:32:55.948 05:27:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 01:32:55.948 05:27:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:55.948 05:27:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 01:32:55.948 [2024-12-09 05:27:47.357144] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 01:32:55.948 [2024-12-09 05:27:47.357175] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 01:32:55.948 [2024-12-09 05:27:47.357252] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 01:32:55.948 [2024-12-09 05:27:47.357315] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 01:32:55.948 [2024-12-09 05:27:47.357330] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 01:32:55.948 05:27:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:55.948 05:27:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 01:32:55.948 05:27:47 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:55.948 05:27:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 01:32:55.948 05:27:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 01:32:55.948 05:27:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:55.948 05:27:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # raid_bdev= 01:32:55.948 05:27:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 01:32:55.948 05:27:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 01:32:55.948 05:27:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 01:32:55.948 05:27:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:55.948 05:27:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 01:32:55.948 [2024-12-09 05:27:47.421176] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 01:32:55.948 [2024-12-09 05:27:47.421232] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:32:55.948 [2024-12-09 05:27:47.421259] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 01:32:55.948 [2024-12-09 05:27:47.421272] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:32:55.948 [2024-12-09 05:27:47.423965] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:32:55.948 [2024-12-09 05:27:47.424006] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 01:32:55.948 [2024-12-09 05:27:47.424073] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock 
found on bdev pt1 01:32:55.948 [2024-12-09 05:27:47.424155] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 01:32:55.948 [2024-12-09 05:27:47.424331] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 01:32:55.948 [2024-12-09 05:27:47.424347] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 01:32:55.948 [2024-12-09 05:27:47.424382] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 01:32:55.948 [2024-12-09 05:27:47.424480] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 01:32:55.948 [2024-12-09 05:27:47.424587] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 01:32:55.948 [2024-12-09 05:27:47.424601] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 01:32:55.948 [2024-12-09 05:27:47.424671] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 01:32:55.948 [2024-12-09 05:27:47.424828] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 01:32:55.948 [2024-12-09 05:27:47.424845] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 01:32:55.948 [2024-12-09 05:27:47.424955] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:32:55.948 pt1 01:32:55.948 05:27:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:55.948 05:27:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 01:32:55.948 05:27:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 01:32:55.948 05:27:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 
01:32:55.948 05:27:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:32:55.948 05:27:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:32:55.948 05:27:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:32:55.948 05:27:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 01:32:55.948 05:27:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:32:55.948 05:27:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:32:55.948 05:27:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:32:55.948 05:27:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 01:32:55.948 05:27:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:32:55.948 05:27:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:55.948 05:27:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:32:55.948 05:27:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 01:32:55.948 05:27:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:55.948 05:27:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:32:55.948 "name": "raid_bdev1", 01:32:55.948 "uuid": "152c1d5b-eb9d-4666-874f-25ee82fda861", 01:32:55.948 "strip_size_kb": 0, 01:32:55.948 "state": "online", 01:32:55.948 "raid_level": "raid1", 01:32:55.948 "superblock": true, 01:32:55.948 "num_base_bdevs": 2, 01:32:55.948 "num_base_bdevs_discovered": 1, 01:32:55.948 
"num_base_bdevs_operational": 1, 01:32:55.948 "base_bdevs_list": [ 01:32:55.948 { 01:32:55.948 "name": null, 01:32:55.948 "uuid": "00000000-0000-0000-0000-000000000000", 01:32:55.948 "is_configured": false, 01:32:55.949 "data_offset": 256, 01:32:55.949 "data_size": 7936 01:32:55.949 }, 01:32:55.949 { 01:32:55.949 "name": "pt2", 01:32:55.949 "uuid": "00000000-0000-0000-0000-000000000002", 01:32:55.949 "is_configured": true, 01:32:55.949 "data_offset": 256, 01:32:55.949 "data_size": 7936 01:32:55.949 } 01:32:55.949 ] 01:32:55.949 }' 01:32:55.949 05:27:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:32:55.949 05:27:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 01:32:56.514 05:27:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 01:32:56.514 05:27:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 01:32:56.514 05:27:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:56.514 05:27:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 01:32:56.514 05:27:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:56.514 05:27:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 01:32:56.514 05:27:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 01:32:56.514 05:27:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 01:32:56.514 05:27:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:56.514 05:27:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 01:32:56.514 [2024-12-09 
05:27:47.973728] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 01:32:56.514 05:27:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:56.514 05:27:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # '[' 152c1d5b-eb9d-4666-874f-25ee82fda861 '!=' 152c1d5b-eb9d-4666-874f-25ee82fda861 ']' 01:32:56.514 05:27:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@563 -- # killprocess 87833 01:32:56.514 05:27:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # '[' -z 87833 ']' 01:32:56.514 05:27:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@958 -- # kill -0 87833 01:32:56.514 05:27:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # uname 01:32:56.514 05:27:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:32:56.514 05:27:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87833 01:32:56.514 killing process with pid 87833 01:32:56.514 05:27:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:32:56.514 05:27:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:32:56.514 05:27:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87833' 01:32:56.514 05:27:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@973 -- # kill 87833 01:32:56.514 [2024-12-09 05:27:48.033165] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 01:32:56.514 [2024-12-09 05:27:48.033252] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 01:32:56.514 05:27:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@978 -- # wait 87833 
01:32:56.514 [2024-12-09 05:27:48.033327] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 01:32:56.514 [2024-12-09 05:27:48.033352] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 01:32:56.773 [2024-12-09 05:27:48.227128] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 01:32:58.146 ************************************ 01:32:58.146 END TEST raid_superblock_test_md_separate 01:32:58.146 ************************************ 01:32:58.146 05:27:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@565 -- # return 0 01:32:58.146 01:32:58.146 real 0m6.602s 01:32:58.146 user 0m10.351s 01:32:58.146 sys 0m0.969s 01:32:58.147 05:27:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 01:32:58.147 05:27:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 01:32:58.147 05:27:49 bdev_raid -- bdev/bdev_raid.sh@1006 -- # '[' true = true ']' 01:32:58.147 05:27:49 bdev_raid -- bdev/bdev_raid.sh@1007 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 01:32:58.147 05:27:49 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 01:32:58.147 05:27:49 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 01:32:58.147 05:27:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 01:32:58.147 ************************************ 01:32:58.147 START TEST raid_rebuild_test_sb_md_separate 01:32:58.147 ************************************ 01:32:58.147 05:27:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 01:32:58.147 05:27:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 01:32:58.147 05:27:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=2 01:32:58.147 05:27:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local superblock=true 01:32:58.147 05:27:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local background_io=false 01:32:58.147 05:27:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # local verify=true 01:32:58.147 05:27:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 01:32:58.147 05:27:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 01:32:58.147 05:27:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 01:32:58.147 05:27:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 01:32:58.147 05:27:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 01:32:58.147 05:27:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 01:32:58.147 05:27:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 01:32:58.147 05:27:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 01:32:58.147 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
01:32:58.147 05:27:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 01:32:58.147 05:27:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # local base_bdevs 01:32:58.147 05:27:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 01:32:58.147 05:27:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local strip_size 01:32:58.147 05:27:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local create_arg 01:32:58.147 05:27:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 01:32:58.147 05:27:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@579 -- # local data_offset 01:32:58.147 05:27:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 01:32:58.147 05:27:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # strip_size=0 01:32:58.147 05:27:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 01:32:58.147 05:27:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 01:32:58.147 05:27:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@597 -- # raid_pid=88160 01:32:58.147 05:27:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@598 -- # waitforlisten 88160 01:32:58.147 05:27:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 88160 ']' 01:32:58.147 05:27:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:32:58.147 05:27:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 01:32:58.147 05:27:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 01:32:58.147 05:27:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:32:58.147 05:27:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 01:32:58.147 05:27:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 01:32:58.147 [2024-12-09 05:27:49.519448] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:32:58.147 [2024-12-09 05:27:49.519859] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88160 ] 01:32:58.147 I/O size of 3145728 is greater than zero copy threshold (65536). 01:32:58.147 Zero copy mechanism will not be used. 
01:32:58.147 [2024-12-09 05:27:49.710253] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:32:58.405 [2024-12-09 05:27:49.843119] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:32:58.664 [2024-12-09 05:27:50.050314] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 01:32:58.664 [2024-12-09 05:27:50.050646] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 01:32:58.922 05:27:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:32:58.922 05:27:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 01:32:58.922 05:27:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 01:32:58.922 05:27:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 01:32:58.922 05:27:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:58.922 05:27:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 01:32:58.922 BaseBdev1_malloc 01:32:58.922 05:27:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:58.922 05:27:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 01:32:58.922 05:27:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:58.922 05:27:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 01:32:58.922 [2024-12-09 05:27:50.497515] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 01:32:58.922 [2024-12-09 05:27:50.497601] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:32:58.922 [2024-12-09 05:27:50.497633] 
vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 01:32:58.922 [2024-12-09 05:27:50.497651] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:32:58.922 [2024-12-09 05:27:50.500275] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:32:58.922 [2024-12-09 05:27:50.500337] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 01:32:58.922 BaseBdev1 01:32:58.922 05:27:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:58.922 05:27:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 01:32:58.922 05:27:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 01:32:58.922 05:27:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:58.922 05:27:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 01:32:59.181 BaseBdev2_malloc 01:32:59.181 05:27:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:59.181 05:27:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 01:32:59.181 05:27:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:59.181 05:27:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 01:32:59.181 [2024-12-09 05:27:50.556133] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 01:32:59.181 [2024-12-09 05:27:50.556413] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:32:59.181 [2024-12-09 05:27:50.556451] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x616000007e80 01:32:59.181 [2024-12-09 05:27:50.556469] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:32:59.181 [2024-12-09 05:27:50.559000] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:32:59.181 [2024-12-09 05:27:50.559045] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 01:32:59.181 BaseBdev2 01:32:59.181 05:27:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:59.181 05:27:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b spare_malloc 01:32:59.181 05:27:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:59.181 05:27:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 01:32:59.181 spare_malloc 01:32:59.181 05:27:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:59.181 05:27:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 01:32:59.181 05:27:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:59.181 05:27:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 01:32:59.181 spare_delay 01:32:59.181 05:27:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:59.181 05:27:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 01:32:59.181 05:27:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:59.181 05:27:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 01:32:59.181 [2024-12-09 
05:27:50.628477] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 01:32:59.181 [2024-12-09 05:27:50.628552] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:32:59.181 [2024-12-09 05:27:50.628598] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 01:32:59.181 [2024-12-09 05:27:50.628617] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:32:59.181 [2024-12-09 05:27:50.631235] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:32:59.181 [2024-12-09 05:27:50.631499] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 01:32:59.181 spare 01:32:59.181 05:27:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:59.181 05:27:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 01:32:59.181 05:27:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:59.181 05:27:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 01:32:59.181 [2024-12-09 05:27:50.640526] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 01:32:59.181 [2024-12-09 05:27:50.643237] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 01:32:59.181 [2024-12-09 05:27:50.643683] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 01:32:59.181 [2024-12-09 05:27:50.643714] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 01:32:59.181 [2024-12-09 05:27:50.643833] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 01:32:59.181 [2024-12-09 05:27:50.644067] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 
01:32:59.181 [2024-12-09 05:27:50.644087] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 01:32:59.181 [2024-12-09 05:27:50.644226] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:32:59.181 05:27:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:59.181 05:27:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 01:32:59.181 05:27:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:32:59.181 05:27:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:32:59.181 05:27:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:32:59.181 05:27:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:32:59.181 05:27:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 01:32:59.181 05:27:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:32:59.181 05:27:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:32:59.181 05:27:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:32:59.181 05:27:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 01:32:59.181 05:27:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:32:59.181 05:27:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:59.181 05:27:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 01:32:59.181 05:27:50 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:32:59.181 05:27:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:59.181 05:27:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:32:59.181 "name": "raid_bdev1", 01:32:59.181 "uuid": "8003a835-d8e3-4c45-af3a-aa004e3141fc", 01:32:59.181 "strip_size_kb": 0, 01:32:59.181 "state": "online", 01:32:59.181 "raid_level": "raid1", 01:32:59.181 "superblock": true, 01:32:59.181 "num_base_bdevs": 2, 01:32:59.181 "num_base_bdevs_discovered": 2, 01:32:59.181 "num_base_bdevs_operational": 2, 01:32:59.181 "base_bdevs_list": [ 01:32:59.181 { 01:32:59.181 "name": "BaseBdev1", 01:32:59.181 "uuid": "356186e1-ceca-5374-8d3f-01092aca366a", 01:32:59.181 "is_configured": true, 01:32:59.181 "data_offset": 256, 01:32:59.181 "data_size": 7936 01:32:59.181 }, 01:32:59.181 { 01:32:59.181 "name": "BaseBdev2", 01:32:59.181 "uuid": "eab057d5-e0ab-5ccf-8825-52a976c84aa5", 01:32:59.181 "is_configured": true, 01:32:59.181 "data_offset": 256, 01:32:59.181 "data_size": 7936 01:32:59.181 } 01:32:59.181 ] 01:32:59.181 }' 01:32:59.181 05:27:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:32:59.181 05:27:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 01:32:59.747 05:27:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 01:32:59.747 05:27:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:59.747 05:27:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 01:32:59.747 05:27:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 01:32:59.747 [2024-12-09 05:27:51.169124] bdev_raid.c:1133:raid_bdev_dump_info_json: 
*DEBUG*: raid_bdev_dump_config_json 01:32:59.747 05:27:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:59.747 05:27:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 01:32:59.747 05:27:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 01:32:59.747 05:27:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:59.747 05:27:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 01:32:59.747 05:27:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 01:32:59.747 05:27:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:59.748 05:27:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # data_offset=256 01:32:59.748 05:27:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 01:32:59.748 05:27:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 01:32:59.748 05:27:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@625 -- # local write_unit_size 01:32:59.748 05:27:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 01:32:59.748 05:27:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 01:32:59.748 05:27:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 01:32:59.748 05:27:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 01:32:59.748 05:27:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 01:32:59.748 05:27:51 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 01:32:59.748 05:27:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 01:32:59.748 05:27:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 01:32:59.748 05:27:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 01:32:59.748 05:27:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 01:33:00.006 [2024-12-09 05:27:51.532940] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 01:33:00.006 /dev/nbd0 01:33:00.006 05:27:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 01:33:00.006 05:27:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 01:33:00.006 05:27:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 01:33:00.006 05:27:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 01:33:00.006 05:27:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 01:33:00.006 05:27:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 01:33:00.006 05:27:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 01:33:00.006 05:27:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 01:33:00.006 05:27:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 01:33:00.006 05:27:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 01:33:00.006 05:27:51 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 01:33:00.006 1+0 records in 01:33:00.006 1+0 records out 01:33:00.006 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000297749 s, 13.8 MB/s 01:33:00.006 05:27:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:33:00.006 05:27:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 01:33:00.006 05:27:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:33:00.006 05:27:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 01:33:00.006 05:27:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 01:33:00.006 05:27:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 01:33:00.006 05:27:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 01:33:00.006 05:27:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 01:33:00.006 05:27:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 01:33:00.006 05:27:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 01:33:00.943 7936+0 records in 01:33:00.943 7936+0 records out 01:33:00.943 32505856 bytes (33 MB, 31 MiB) copied, 0.906837 s, 35.8 MB/s 01:33:00.943 05:27:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 01:33:00.943 05:27:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 01:33:00.943 05:27:52 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 01:33:00.943 05:27:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 01:33:00.943 05:27:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 01:33:00.943 05:27:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 01:33:00.943 05:27:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 01:33:01.202 05:27:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 01:33:01.202 [2024-12-09 05:27:52.793208] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:33:01.202 05:27:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 01:33:01.202 05:27:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 01:33:01.202 05:27:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:33:01.202 05:27:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:33:01.202 05:27:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 01:33:01.202 05:27:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 01:33:01.202 05:27:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 01:33:01.202 05:27:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 01:33:01.202 05:27:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:01.202 05:27:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 01:33:01.202 
[2024-12-09 05:27:52.809332] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 01:33:01.202 05:27:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:01.202 05:27:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 01:33:01.202 05:27:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:33:01.202 05:27:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:33:01.202 05:27:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:33:01.202 05:27:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:33:01.202 05:27:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 01:33:01.202 05:27:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:33:01.202 05:27:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:33:01.202 05:27:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:33:01.202 05:27:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 01:33:01.461 05:27:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:33:01.461 05:27:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:33:01.461 05:27:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:01.461 05:27:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 01:33:01.461 05:27:52 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:01.461 05:27:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:33:01.461 "name": "raid_bdev1", 01:33:01.461 "uuid": "8003a835-d8e3-4c45-af3a-aa004e3141fc", 01:33:01.461 "strip_size_kb": 0, 01:33:01.461 "state": "online", 01:33:01.462 "raid_level": "raid1", 01:33:01.462 "superblock": true, 01:33:01.462 "num_base_bdevs": 2, 01:33:01.462 "num_base_bdevs_discovered": 1, 01:33:01.462 "num_base_bdevs_operational": 1, 01:33:01.462 "base_bdevs_list": [ 01:33:01.462 { 01:33:01.462 "name": null, 01:33:01.462 "uuid": "00000000-0000-0000-0000-000000000000", 01:33:01.462 "is_configured": false, 01:33:01.462 "data_offset": 0, 01:33:01.462 "data_size": 7936 01:33:01.462 }, 01:33:01.462 { 01:33:01.462 "name": "BaseBdev2", 01:33:01.462 "uuid": "eab057d5-e0ab-5ccf-8825-52a976c84aa5", 01:33:01.462 "is_configured": true, 01:33:01.462 "data_offset": 256, 01:33:01.462 "data_size": 7936 01:33:01.462 } 01:33:01.462 ] 01:33:01.462 }' 01:33:01.462 05:27:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:33:01.462 05:27:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 01:33:01.721 05:27:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 01:33:01.721 05:27:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:01.721 05:27:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 01:33:01.721 [2024-12-09 05:27:53.333530] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 01:33:01.980 [2024-12-09 05:27:53.346633] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 01:33:01.980 05:27:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:01.980 05:27:53 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@647 -- # sleep 1 01:33:01.980 [2024-12-09 05:27:53.349327] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 01:33:02.917 05:27:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 01:33:02.917 05:27:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:33:02.917 05:27:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 01:33:02.917 05:27:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 01:33:02.917 05:27:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:33:02.917 05:27:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:33:02.917 05:27:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:33:02.917 05:27:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:02.917 05:27:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 01:33:02.917 05:27:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:02.917 05:27:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:33:02.917 "name": "raid_bdev1", 01:33:02.917 "uuid": "8003a835-d8e3-4c45-af3a-aa004e3141fc", 01:33:02.917 "strip_size_kb": 0, 01:33:02.917 "state": "online", 01:33:02.917 "raid_level": "raid1", 01:33:02.917 "superblock": true, 01:33:02.917 "num_base_bdevs": 2, 01:33:02.917 "num_base_bdevs_discovered": 2, 01:33:02.917 "num_base_bdevs_operational": 2, 01:33:02.917 "process": { 01:33:02.917 "type": "rebuild", 01:33:02.917 
"target": "spare", 01:33:02.917 "progress": { 01:33:02.917 "blocks": 2560, 01:33:02.917 "percent": 32 01:33:02.917 } 01:33:02.917 }, 01:33:02.917 "base_bdevs_list": [ 01:33:02.917 { 01:33:02.917 "name": "spare", 01:33:02.917 "uuid": "8834343a-dc91-523d-aec4-e0ee545eb4f3", 01:33:02.917 "is_configured": true, 01:33:02.917 "data_offset": 256, 01:33:02.917 "data_size": 7936 01:33:02.917 }, 01:33:02.917 { 01:33:02.917 "name": "BaseBdev2", 01:33:02.917 "uuid": "eab057d5-e0ab-5ccf-8825-52a976c84aa5", 01:33:02.917 "is_configured": true, 01:33:02.917 "data_offset": 256, 01:33:02.917 "data_size": 7936 01:33:02.917 } 01:33:02.917 ] 01:33:02.917 }' 01:33:02.917 05:27:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:33:02.917 05:27:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 01:33:02.917 05:27:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:33:02.917 05:27:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 01:33:02.917 05:27:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 01:33:02.917 05:27:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:02.917 05:27:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 01:33:02.917 [2024-12-09 05:27:54.522938] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 01:33:03.176 [2024-12-09 05:27:54.558088] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 01:33:03.176 [2024-12-09 05:27:54.558231] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:33:03.176 [2024-12-09 05:27:54.558254] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 01:33:03.176 
[2024-12-09 05:27:54.558271] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 01:33:03.176 05:27:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:03.176 05:27:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 01:33:03.176 05:27:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:33:03.176 05:27:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:33:03.176 05:27:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:33:03.176 05:27:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:33:03.176 05:27:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 01:33:03.176 05:27:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:33:03.176 05:27:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:33:03.176 05:27:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:33:03.176 05:27:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 01:33:03.176 05:27:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:33:03.176 05:27:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:03.176 05:27:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:33:03.176 05:27:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 01:33:03.176 05:27:54 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:03.176 05:27:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:33:03.176 "name": "raid_bdev1", 01:33:03.176 "uuid": "8003a835-d8e3-4c45-af3a-aa004e3141fc", 01:33:03.176 "strip_size_kb": 0, 01:33:03.176 "state": "online", 01:33:03.176 "raid_level": "raid1", 01:33:03.176 "superblock": true, 01:33:03.176 "num_base_bdevs": 2, 01:33:03.176 "num_base_bdevs_discovered": 1, 01:33:03.176 "num_base_bdevs_operational": 1, 01:33:03.176 "base_bdevs_list": [ 01:33:03.176 { 01:33:03.176 "name": null, 01:33:03.176 "uuid": "00000000-0000-0000-0000-000000000000", 01:33:03.176 "is_configured": false, 01:33:03.176 "data_offset": 0, 01:33:03.176 "data_size": 7936 01:33:03.176 }, 01:33:03.176 { 01:33:03.176 "name": "BaseBdev2", 01:33:03.176 "uuid": "eab057d5-e0ab-5ccf-8825-52a976c84aa5", 01:33:03.176 "is_configured": true, 01:33:03.176 "data_offset": 256, 01:33:03.176 "data_size": 7936 01:33:03.176 } 01:33:03.176 ] 01:33:03.176 }' 01:33:03.176 05:27:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:33:03.176 05:27:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 01:33:03.475 05:27:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 01:33:03.475 05:27:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:33:03.475 05:27:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 01:33:03.475 05:27:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 01:33:03.475 05:27:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:33:03.475 05:27:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # 
rpc_cmd bdev_raid_get_bdevs all 01:33:03.475 05:27:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:33:03.475 05:27:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:03.475 05:27:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 01:33:03.732 05:27:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:03.732 05:27:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:33:03.732 "name": "raid_bdev1", 01:33:03.732 "uuid": "8003a835-d8e3-4c45-af3a-aa004e3141fc", 01:33:03.732 "strip_size_kb": 0, 01:33:03.732 "state": "online", 01:33:03.732 "raid_level": "raid1", 01:33:03.732 "superblock": true, 01:33:03.732 "num_base_bdevs": 2, 01:33:03.732 "num_base_bdevs_discovered": 1, 01:33:03.732 "num_base_bdevs_operational": 1, 01:33:03.732 "base_bdevs_list": [ 01:33:03.732 { 01:33:03.732 "name": null, 01:33:03.732 "uuid": "00000000-0000-0000-0000-000000000000", 01:33:03.732 "is_configured": false, 01:33:03.732 "data_offset": 0, 01:33:03.732 "data_size": 7936 01:33:03.732 }, 01:33:03.732 { 01:33:03.732 "name": "BaseBdev2", 01:33:03.732 "uuid": "eab057d5-e0ab-5ccf-8825-52a976c84aa5", 01:33:03.732 "is_configured": true, 01:33:03.732 "data_offset": 256, 01:33:03.732 "data_size": 7936 01:33:03.732 } 01:33:03.732 ] 01:33:03.732 }' 01:33:03.732 05:27:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:33:03.732 05:27:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 01:33:03.732 05:27:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:33:03.732 05:27:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 01:33:03.732 
05:27:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 01:33:03.732 05:27:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:03.732 05:27:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 01:33:03.732 [2024-12-09 05:27:55.244256] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 01:33:03.732 [2024-12-09 05:27:55.256451] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 01:33:03.732 05:27:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:03.732 05:27:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # sleep 1 01:33:03.732 [2024-12-09 05:27:55.258931] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 01:33:04.666 05:27:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 01:33:04.666 05:27:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:33:04.666 05:27:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 01:33:04.666 05:27:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 01:33:04.666 05:27:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:33:04.666 05:27:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:33:04.666 05:27:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:33:04.666 05:27:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:04.666 05:27:56 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 01:33:04.924 05:27:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:04.924 05:27:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:33:04.924 "name": "raid_bdev1", 01:33:04.924 "uuid": "8003a835-d8e3-4c45-af3a-aa004e3141fc", 01:33:04.924 "strip_size_kb": 0, 01:33:04.924 "state": "online", 01:33:04.924 "raid_level": "raid1", 01:33:04.924 "superblock": true, 01:33:04.924 "num_base_bdevs": 2, 01:33:04.924 "num_base_bdevs_discovered": 2, 01:33:04.924 "num_base_bdevs_operational": 2, 01:33:04.924 "process": { 01:33:04.924 "type": "rebuild", 01:33:04.924 "target": "spare", 01:33:04.924 "progress": { 01:33:04.924 "blocks": 2560, 01:33:04.924 "percent": 32 01:33:04.924 } 01:33:04.924 }, 01:33:04.924 "base_bdevs_list": [ 01:33:04.924 { 01:33:04.924 "name": "spare", 01:33:04.924 "uuid": "8834343a-dc91-523d-aec4-e0ee545eb4f3", 01:33:04.924 "is_configured": true, 01:33:04.924 "data_offset": 256, 01:33:04.924 "data_size": 7936 01:33:04.924 }, 01:33:04.924 { 01:33:04.924 "name": "BaseBdev2", 01:33:04.924 "uuid": "eab057d5-e0ab-5ccf-8825-52a976c84aa5", 01:33:04.924 "is_configured": true, 01:33:04.924 "data_offset": 256, 01:33:04.924 "data_size": 7936 01:33:04.924 } 01:33:04.924 ] 01:33:04.924 }' 01:33:04.924 05:27:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:33:04.924 05:27:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 01:33:04.924 05:27:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:33:04.924 05:27:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 01:33:04.924 05:27:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' true = 
true ']' 01:33:04.924 05:27:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 01:33:04.924 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 01:33:04.924 05:27:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 01:33:04.924 05:27:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 01:33:04.924 05:27:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 01:33:04.924 05:27:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # local timeout=778 01:33:04.924 05:27:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 01:33:04.924 05:27:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 01:33:04.924 05:27:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:33:04.924 05:27:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 01:33:04.924 05:27:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 01:33:04.924 05:27:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:33:04.924 05:27:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:33:04.924 05:27:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:04.924 05:27:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 01:33:04.924 05:27:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:33:04.924 05:27:56 bdev_raid.raid_rebuild_test_sb_md_separate 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:04.924 05:27:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:33:04.924 "name": "raid_bdev1", 01:33:04.924 "uuid": "8003a835-d8e3-4c45-af3a-aa004e3141fc", 01:33:04.924 "strip_size_kb": 0, 01:33:04.924 "state": "online", 01:33:04.924 "raid_level": "raid1", 01:33:04.924 "superblock": true, 01:33:04.924 "num_base_bdevs": 2, 01:33:04.924 "num_base_bdevs_discovered": 2, 01:33:04.924 "num_base_bdevs_operational": 2, 01:33:04.924 "process": { 01:33:04.924 "type": "rebuild", 01:33:04.924 "target": "spare", 01:33:04.924 "progress": { 01:33:04.924 "blocks": 2816, 01:33:04.924 "percent": 35 01:33:04.924 } 01:33:04.924 }, 01:33:04.924 "base_bdevs_list": [ 01:33:04.924 { 01:33:04.924 "name": "spare", 01:33:04.924 "uuid": "8834343a-dc91-523d-aec4-e0ee545eb4f3", 01:33:04.924 "is_configured": true, 01:33:04.924 "data_offset": 256, 01:33:04.924 "data_size": 7936 01:33:04.924 }, 01:33:04.924 { 01:33:04.924 "name": "BaseBdev2", 01:33:04.924 "uuid": "eab057d5-e0ab-5ccf-8825-52a976c84aa5", 01:33:04.924 "is_configured": true, 01:33:04.924 "data_offset": 256, 01:33:04.924 "data_size": 7936 01:33:04.924 } 01:33:04.924 ] 01:33:04.924 }' 01:33:04.924 05:27:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:33:04.924 05:27:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 01:33:04.924 05:27:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:33:05.182 05:27:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 01:33:05.182 05:27:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 01:33:06.116 05:27:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 01:33:06.116 05:27:57 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 01:33:06.116 05:27:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:33:06.116 05:27:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 01:33:06.116 05:27:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 01:33:06.116 05:27:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:33:06.116 05:27:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:33:06.116 05:27:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:06.116 05:27:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:33:06.116 05:27:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 01:33:06.116 05:27:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:06.116 05:27:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:33:06.116 "name": "raid_bdev1", 01:33:06.116 "uuid": "8003a835-d8e3-4c45-af3a-aa004e3141fc", 01:33:06.116 "strip_size_kb": 0, 01:33:06.116 "state": "online", 01:33:06.116 "raid_level": "raid1", 01:33:06.116 "superblock": true, 01:33:06.116 "num_base_bdevs": 2, 01:33:06.116 "num_base_bdevs_discovered": 2, 01:33:06.116 "num_base_bdevs_operational": 2, 01:33:06.116 "process": { 01:33:06.116 "type": "rebuild", 01:33:06.116 "target": "spare", 01:33:06.116 "progress": { 01:33:06.116 "blocks": 5888, 01:33:06.116 "percent": 74 01:33:06.116 } 01:33:06.116 }, 01:33:06.116 "base_bdevs_list": [ 01:33:06.116 { 01:33:06.116 "name": "spare", 01:33:06.116 "uuid": 
"8834343a-dc91-523d-aec4-e0ee545eb4f3", 01:33:06.116 "is_configured": true, 01:33:06.116 "data_offset": 256, 01:33:06.116 "data_size": 7936 01:33:06.116 }, 01:33:06.116 { 01:33:06.116 "name": "BaseBdev2", 01:33:06.116 "uuid": "eab057d5-e0ab-5ccf-8825-52a976c84aa5", 01:33:06.116 "is_configured": true, 01:33:06.116 "data_offset": 256, 01:33:06.116 "data_size": 7936 01:33:06.116 } 01:33:06.116 ] 01:33:06.116 }' 01:33:06.116 05:27:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:33:06.116 05:27:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 01:33:06.116 05:27:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:33:06.374 05:27:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 01:33:06.374 05:27:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 01:33:06.940 [2024-12-09 05:27:58.381042] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 01:33:06.940 [2024-12-09 05:27:58.381344] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 01:33:06.940 [2024-12-09 05:27:58.381523] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:33:07.197 05:27:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 01:33:07.197 05:27:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 01:33:07.197 05:27:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:33:07.197 05:27:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 01:33:07.197 05:27:58 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@171 -- # local target=spare 01:33:07.197 05:27:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:33:07.197 05:27:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:33:07.197 05:27:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:33:07.197 05:27:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:07.197 05:27:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 01:33:07.197 05:27:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:07.197 05:27:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:33:07.198 "name": "raid_bdev1", 01:33:07.198 "uuid": "8003a835-d8e3-4c45-af3a-aa004e3141fc", 01:33:07.198 "strip_size_kb": 0, 01:33:07.198 "state": "online", 01:33:07.198 "raid_level": "raid1", 01:33:07.198 "superblock": true, 01:33:07.198 "num_base_bdevs": 2, 01:33:07.198 "num_base_bdevs_discovered": 2, 01:33:07.198 "num_base_bdevs_operational": 2, 01:33:07.198 "base_bdevs_list": [ 01:33:07.198 { 01:33:07.198 "name": "spare", 01:33:07.198 "uuid": "8834343a-dc91-523d-aec4-e0ee545eb4f3", 01:33:07.198 "is_configured": true, 01:33:07.198 "data_offset": 256, 01:33:07.198 "data_size": 7936 01:33:07.198 }, 01:33:07.198 { 01:33:07.198 "name": "BaseBdev2", 01:33:07.198 "uuid": "eab057d5-e0ab-5ccf-8825-52a976c84aa5", 01:33:07.198 "is_configured": true, 01:33:07.198 "data_offset": 256, 01:33:07.198 "data_size": 7936 01:33:07.198 } 01:33:07.198 ] 01:33:07.198 }' 01:33:07.198 05:27:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:33:07.456 05:27:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d 
]] 01:33:07.456 05:27:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:33:07.456 05:27:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 01:33:07.456 05:27:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@709 -- # break 01:33:07.456 05:27:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 01:33:07.456 05:27:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:33:07.456 05:27:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 01:33:07.456 05:27:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 01:33:07.456 05:27:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:33:07.456 05:27:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:33:07.456 05:27:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:07.456 05:27:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:33:07.456 05:27:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 01:33:07.456 05:27:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:07.456 05:27:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:33:07.456 "name": "raid_bdev1", 01:33:07.456 "uuid": "8003a835-d8e3-4c45-af3a-aa004e3141fc", 01:33:07.456 "strip_size_kb": 0, 01:33:07.456 "state": "online", 01:33:07.456 "raid_level": "raid1", 01:33:07.456 "superblock": true, 01:33:07.456 "num_base_bdevs": 2, 01:33:07.456 "num_base_bdevs_discovered": 2, 
01:33:07.456 "num_base_bdevs_operational": 2, 01:33:07.456 "base_bdevs_list": [ 01:33:07.456 { 01:33:07.456 "name": "spare", 01:33:07.456 "uuid": "8834343a-dc91-523d-aec4-e0ee545eb4f3", 01:33:07.456 "is_configured": true, 01:33:07.456 "data_offset": 256, 01:33:07.456 "data_size": 7936 01:33:07.456 }, 01:33:07.456 { 01:33:07.456 "name": "BaseBdev2", 01:33:07.456 "uuid": "eab057d5-e0ab-5ccf-8825-52a976c84aa5", 01:33:07.456 "is_configured": true, 01:33:07.456 "data_offset": 256, 01:33:07.456 "data_size": 7936 01:33:07.456 } 01:33:07.456 ] 01:33:07.456 }' 01:33:07.456 05:27:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:33:07.456 05:27:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 01:33:07.456 05:27:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:33:07.714 05:27:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 01:33:07.714 05:27:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 01:33:07.714 05:27:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:33:07.714 05:27:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:33:07.714 05:27:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:33:07.714 05:27:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:33:07.714 05:27:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 01:33:07.714 05:27:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:33:07.714 05:27:59 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:33:07.714 05:27:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:33:07.714 05:27:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 01:33:07.714 05:27:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:33:07.714 05:27:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:33:07.714 05:27:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:07.714 05:27:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 01:33:07.714 05:27:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:07.714 05:27:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:33:07.714 "name": "raid_bdev1", 01:33:07.714 "uuid": "8003a835-d8e3-4c45-af3a-aa004e3141fc", 01:33:07.714 "strip_size_kb": 0, 01:33:07.714 "state": "online", 01:33:07.714 "raid_level": "raid1", 01:33:07.714 "superblock": true, 01:33:07.714 "num_base_bdevs": 2, 01:33:07.714 "num_base_bdevs_discovered": 2, 01:33:07.714 "num_base_bdevs_operational": 2, 01:33:07.714 "base_bdevs_list": [ 01:33:07.714 { 01:33:07.714 "name": "spare", 01:33:07.714 "uuid": "8834343a-dc91-523d-aec4-e0ee545eb4f3", 01:33:07.714 "is_configured": true, 01:33:07.714 "data_offset": 256, 01:33:07.714 "data_size": 7936 01:33:07.714 }, 01:33:07.714 { 01:33:07.714 "name": "BaseBdev2", 01:33:07.714 "uuid": "eab057d5-e0ab-5ccf-8825-52a976c84aa5", 01:33:07.714 "is_configured": true, 01:33:07.714 "data_offset": 256, 01:33:07.714 "data_size": 7936 01:33:07.714 } 01:33:07.714 ] 01:33:07.714 }' 01:33:07.714 05:27:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:33:07.714 
05:27:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 01:33:07.972 05:27:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 01:33:07.972 05:27:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:07.972 05:27:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 01:33:07.972 [2024-12-09 05:27:59.586896] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 01:33:07.972 [2024-12-09 05:27:59.587098] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 01:33:07.972 [2024-12-09 05:27:59.587412] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 01:33:07.972 [2024-12-09 05:27:59.587540] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 01:33:07.972 [2024-12-09 05:27:59.587556] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 01:33:08.232 05:27:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:08.232 05:27:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 01:33:08.232 05:27:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:08.232 05:27:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # jq length 01:33:08.232 05:27:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 01:33:08.232 05:27:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:08.232 05:27:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 01:33:08.232 05:27:59 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@722 -- # '[' true = true ']' 01:33:08.232 05:27:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 01:33:08.232 05:27:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 01:33:08.232 05:27:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 01:33:08.232 05:27:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 01:33:08.232 05:27:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 01:33:08.232 05:27:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 01:33:08.232 05:27:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 01:33:08.232 05:27:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 01:33:08.232 05:27:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 01:33:08.232 05:27:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 01:33:08.232 05:27:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 01:33:08.490 /dev/nbd0 01:33:08.491 05:27:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 01:33:08.491 05:27:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 01:33:08.491 05:27:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 01:33:08.491 05:27:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 01:33:08.491 05:27:59 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 01:33:08.491 05:27:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 01:33:08.491 05:27:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 01:33:08.491 05:27:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 01:33:08.491 05:27:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 01:33:08.491 05:27:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 01:33:08.491 05:27:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 01:33:08.491 1+0 records in 01:33:08.491 1+0 records out 01:33:08.491 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000257049 s, 15.9 MB/s 01:33:08.491 05:27:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:33:08.491 05:27:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 01:33:08.491 05:27:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:33:08.491 05:27:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 01:33:08.491 05:27:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 01:33:08.491 05:27:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 01:33:08.491 05:27:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 01:33:08.491 05:27:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 01:33:08.750 /dev/nbd1 01:33:08.750 05:28:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 01:33:08.750 05:28:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 01:33:08.750 05:28:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 01:33:08.750 05:28:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 01:33:08.750 05:28:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 01:33:08.750 05:28:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 01:33:08.750 05:28:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 01:33:08.750 05:28:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 01:33:08.750 05:28:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 01:33:08.750 05:28:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 01:33:08.750 05:28:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 01:33:08.750 1+0 records in 01:33:08.750 1+0 records out 01:33:08.750 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000385499 s, 10.6 MB/s 01:33:08.750 05:28:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:33:08.750 05:28:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 01:33:08.750 05:28:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm 
-f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:33:08.750 05:28:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 01:33:08.750 05:28:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 01:33:08.750 05:28:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 01:33:08.751 05:28:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 01:33:08.751 05:28:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 01:33:09.011 05:28:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 01:33:09.011 05:28:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 01:33:09.011 05:28:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 01:33:09.011 05:28:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 01:33:09.011 05:28:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 01:33:09.011 05:28:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 01:33:09.011 05:28:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 01:33:09.270 05:28:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 01:33:09.270 05:28:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 01:33:09.270 05:28:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 01:33:09.270 05:28:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # 
(( i = 1 )) 01:33:09.270 05:28:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:33:09.270 05:28:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 01:33:09.270 05:28:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 01:33:09.270 05:28:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 01:33:09.270 05:28:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 01:33:09.270 05:28:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 01:33:09.529 05:28:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 01:33:09.529 05:28:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 01:33:09.529 05:28:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 01:33:09.529 05:28:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:33:09.529 05:28:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:33:09.529 05:28:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 01:33:09.529 05:28:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 01:33:09.529 05:28:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 01:33:09.529 05:28:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 01:33:09.529 05:28:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 01:33:09.529 05:28:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- 
# xtrace_disable 01:33:09.529 05:28:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 01:33:09.529 05:28:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:09.529 05:28:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 01:33:09.529 05:28:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:09.529 05:28:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 01:33:09.529 [2024-12-09 05:28:01.057770] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 01:33:09.529 [2024-12-09 05:28:01.057865] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:33:09.529 [2024-12-09 05:28:01.057898] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 01:33:09.529 [2024-12-09 05:28:01.057914] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:33:09.529 [2024-12-09 05:28:01.060739] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:33:09.529 [2024-12-09 05:28:01.060811] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 01:33:09.529 [2024-12-09 05:28:01.060924] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 01:33:09.529 [2024-12-09 05:28:01.061000] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 01:33:09.529 [2024-12-09 05:28:01.061162] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 01:33:09.529 spare 01:33:09.529 05:28:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:09.529 05:28:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 01:33:09.529 
05:28:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:09.529 05:28:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 01:33:09.787 [2024-12-09 05:28:01.161281] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 01:33:09.787 [2024-12-09 05:28:01.161316] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 01:33:09.787 [2024-12-09 05:28:01.161469] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 01:33:09.787 [2024-12-09 05:28:01.161699] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 01:33:09.787 [2024-12-09 05:28:01.161717] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 01:33:09.787 [2024-12-09 05:28:01.161944] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:33:09.787 05:28:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:09.787 05:28:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 01:33:09.787 05:28:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:33:09.787 05:28:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:33:09.787 05:28:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:33:09.787 05:28:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:33:09.787 05:28:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 01:33:09.787 05:28:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:33:09.787 
05:28:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:33:09.787 05:28:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:33:09.787 05:28:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 01:33:09.787 05:28:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:33:09.787 05:28:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:09.787 05:28:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:33:09.787 05:28:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 01:33:09.787 05:28:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:09.787 05:28:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:33:09.787 "name": "raid_bdev1", 01:33:09.787 "uuid": "8003a835-d8e3-4c45-af3a-aa004e3141fc", 01:33:09.787 "strip_size_kb": 0, 01:33:09.787 "state": "online", 01:33:09.787 "raid_level": "raid1", 01:33:09.787 "superblock": true, 01:33:09.787 "num_base_bdevs": 2, 01:33:09.787 "num_base_bdevs_discovered": 2, 01:33:09.787 "num_base_bdevs_operational": 2, 01:33:09.787 "base_bdevs_list": [ 01:33:09.788 { 01:33:09.788 "name": "spare", 01:33:09.788 "uuid": "8834343a-dc91-523d-aec4-e0ee545eb4f3", 01:33:09.788 "is_configured": true, 01:33:09.788 "data_offset": 256, 01:33:09.788 "data_size": 7936 01:33:09.788 }, 01:33:09.788 { 01:33:09.788 "name": "BaseBdev2", 01:33:09.788 "uuid": "eab057d5-e0ab-5ccf-8825-52a976c84aa5", 01:33:09.788 "is_configured": true, 01:33:09.788 "data_offset": 256, 01:33:09.788 "data_size": 7936 01:33:09.788 } 01:33:09.788 ] 01:33:09.788 }' 01:33:09.788 05:28:01 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 01:33:09.788 05:28:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 01:33:10.354 05:28:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 01:33:10.354 05:28:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:33:10.354 05:28:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 01:33:10.354 05:28:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 01:33:10.354 05:28:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:33:10.354 05:28:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:33:10.354 05:28:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:33:10.354 05:28:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:10.354 05:28:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 01:33:10.354 05:28:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:10.354 05:28:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:33:10.354 "name": "raid_bdev1", 01:33:10.354 "uuid": "8003a835-d8e3-4c45-af3a-aa004e3141fc", 01:33:10.354 "strip_size_kb": 0, 01:33:10.354 "state": "online", 01:33:10.354 "raid_level": "raid1", 01:33:10.354 "superblock": true, 01:33:10.354 "num_base_bdevs": 2, 01:33:10.354 "num_base_bdevs_discovered": 2, 01:33:10.354 "num_base_bdevs_operational": 2, 01:33:10.354 "base_bdevs_list": [ 01:33:10.354 { 01:33:10.354 "name": "spare", 01:33:10.354 "uuid": "8834343a-dc91-523d-aec4-e0ee545eb4f3", 01:33:10.354 
"is_configured": true, 01:33:10.354 "data_offset": 256, 01:33:10.354 "data_size": 7936 01:33:10.354 }, 01:33:10.354 { 01:33:10.354 "name": "BaseBdev2", 01:33:10.354 "uuid": "eab057d5-e0ab-5ccf-8825-52a976c84aa5", 01:33:10.354 "is_configured": true, 01:33:10.354 "data_offset": 256, 01:33:10.354 "data_size": 7936 01:33:10.354 } 01:33:10.354 ] 01:33:10.354 }' 01:33:10.354 05:28:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:33:10.354 05:28:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 01:33:10.354 05:28:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:33:10.354 05:28:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 01:33:10.354 05:28:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 01:33:10.354 05:28:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 01:33:10.354 05:28:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:10.354 05:28:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 01:33:10.354 05:28:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:10.354 05:28:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 01:33:10.354 05:28:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 01:33:10.354 05:28:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:10.354 05:28:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 01:33:10.354 [2024-12-09 05:28:01.894163] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 01:33:10.354 05:28:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:10.354 05:28:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 01:33:10.354 05:28:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:33:10.354 05:28:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:33:10.354 05:28:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:33:10.354 05:28:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:33:10.354 05:28:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 01:33:10.354 05:28:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:33:10.354 05:28:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:33:10.354 05:28:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:33:10.354 05:28:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 01:33:10.354 05:28:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:33:10.354 05:28:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:33:10.354 05:28:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:10.354 05:28:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 01:33:10.354 05:28:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 01:33:10.354 05:28:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:33:10.354 "name": "raid_bdev1", 01:33:10.354 "uuid": "8003a835-d8e3-4c45-af3a-aa004e3141fc", 01:33:10.354 "strip_size_kb": 0, 01:33:10.354 "state": "online", 01:33:10.354 "raid_level": "raid1", 01:33:10.354 "superblock": true, 01:33:10.354 "num_base_bdevs": 2, 01:33:10.354 "num_base_bdevs_discovered": 1, 01:33:10.354 "num_base_bdevs_operational": 1, 01:33:10.354 "base_bdevs_list": [ 01:33:10.354 { 01:33:10.354 "name": null, 01:33:10.354 "uuid": "00000000-0000-0000-0000-000000000000", 01:33:10.354 "is_configured": false, 01:33:10.354 "data_offset": 0, 01:33:10.354 "data_size": 7936 01:33:10.354 }, 01:33:10.354 { 01:33:10.354 "name": "BaseBdev2", 01:33:10.354 "uuid": "eab057d5-e0ab-5ccf-8825-52a976c84aa5", 01:33:10.354 "is_configured": true, 01:33:10.354 "data_offset": 256, 01:33:10.354 "data_size": 7936 01:33:10.354 } 01:33:10.354 ] 01:33:10.354 }' 01:33:10.354 05:28:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:33:10.354 05:28:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 01:33:10.922 05:28:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 01:33:10.922 05:28:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:10.922 05:28:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 01:33:10.922 [2024-12-09 05:28:02.434397] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 01:33:10.922 [2024-12-09 05:28:02.434667] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 01:33:10.922 [2024-12-09 05:28:02.434694] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid 
bdev raid_bdev1. 01:33:10.922 [2024-12-09 05:28:02.434798] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 01:33:10.922 [2024-12-09 05:28:02.448148] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 01:33:10.922 05:28:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:10.922 05:28:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@757 -- # sleep 1 01:33:10.922 [2024-12-09 05:28:02.450791] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 01:33:11.861 05:28:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 01:33:11.861 05:28:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:33:11.861 05:28:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 01:33:11.861 05:28:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 01:33:11.861 05:28:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:33:11.861 05:28:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:33:11.861 05:28:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:11.861 05:28:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:33:11.861 05:28:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 01:33:12.126 05:28:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:12.126 05:28:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:33:12.126 "name": 
"raid_bdev1", 01:33:12.126 "uuid": "8003a835-d8e3-4c45-af3a-aa004e3141fc", 01:33:12.126 "strip_size_kb": 0, 01:33:12.126 "state": "online", 01:33:12.126 "raid_level": "raid1", 01:33:12.126 "superblock": true, 01:33:12.126 "num_base_bdevs": 2, 01:33:12.126 "num_base_bdevs_discovered": 2, 01:33:12.126 "num_base_bdevs_operational": 2, 01:33:12.126 "process": { 01:33:12.126 "type": "rebuild", 01:33:12.126 "target": "spare", 01:33:12.126 "progress": { 01:33:12.126 "blocks": 2560, 01:33:12.126 "percent": 32 01:33:12.126 } 01:33:12.126 }, 01:33:12.126 "base_bdevs_list": [ 01:33:12.126 { 01:33:12.126 "name": "spare", 01:33:12.126 "uuid": "8834343a-dc91-523d-aec4-e0ee545eb4f3", 01:33:12.126 "is_configured": true, 01:33:12.126 "data_offset": 256, 01:33:12.126 "data_size": 7936 01:33:12.126 }, 01:33:12.126 { 01:33:12.126 "name": "BaseBdev2", 01:33:12.126 "uuid": "eab057d5-e0ab-5ccf-8825-52a976c84aa5", 01:33:12.126 "is_configured": true, 01:33:12.126 "data_offset": 256, 01:33:12.126 "data_size": 7936 01:33:12.126 } 01:33:12.126 ] 01:33:12.126 }' 01:33:12.126 05:28:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:33:12.126 05:28:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 01:33:12.126 05:28:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:33:12.126 05:28:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 01:33:12.126 05:28:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 01:33:12.126 05:28:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:12.126 05:28:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 01:33:12.126 [2024-12-09 05:28:03.624315] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 01:33:12.126 [2024-12-09 05:28:03.660092] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 01:33:12.126 [2024-12-09 05:28:03.660406] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:33:12.126 [2024-12-09 05:28:03.660660] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 01:33:12.126 [2024-12-09 05:28:03.660725] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 01:33:12.127 05:28:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:12.127 05:28:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 01:33:12.127 05:28:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:33:12.127 05:28:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:33:12.127 05:28:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:33:12.127 05:28:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:33:12.127 05:28:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 01:33:12.127 05:28:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:33:12.127 05:28:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:33:12.127 05:28:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:33:12.127 05:28:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 01:33:12.127 05:28:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 01:33:12.127 05:28:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:33:12.127 05:28:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:12.127 05:28:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 01:33:12.127 05:28:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:12.127 05:28:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:33:12.127 "name": "raid_bdev1", 01:33:12.127 "uuid": "8003a835-d8e3-4c45-af3a-aa004e3141fc", 01:33:12.127 "strip_size_kb": 0, 01:33:12.127 "state": "online", 01:33:12.127 "raid_level": "raid1", 01:33:12.127 "superblock": true, 01:33:12.127 "num_base_bdevs": 2, 01:33:12.127 "num_base_bdevs_discovered": 1, 01:33:12.127 "num_base_bdevs_operational": 1, 01:33:12.127 "base_bdevs_list": [ 01:33:12.127 { 01:33:12.127 "name": null, 01:33:12.127 "uuid": "00000000-0000-0000-0000-000000000000", 01:33:12.127 "is_configured": false, 01:33:12.127 "data_offset": 0, 01:33:12.127 "data_size": 7936 01:33:12.127 }, 01:33:12.127 { 01:33:12.127 "name": "BaseBdev2", 01:33:12.127 "uuid": "eab057d5-e0ab-5ccf-8825-52a976c84aa5", 01:33:12.127 "is_configured": true, 01:33:12.127 "data_offset": 256, 01:33:12.127 "data_size": 7936 01:33:12.127 } 01:33:12.127 ] 01:33:12.127 }' 01:33:12.127 05:28:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:33:12.127 05:28:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 01:33:12.694 05:28:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 01:33:12.695 05:28:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:12.695 05:28:04 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 01:33:12.695 [2024-12-09 05:28:04.210113] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 01:33:12.695 [2024-12-09 05:28:04.210192] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:33:12.695 [2024-12-09 05:28:04.210229] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 01:33:12.695 [2024-12-09 05:28:04.210248] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:33:12.695 [2024-12-09 05:28:04.210592] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:33:12.695 [2024-12-09 05:28:04.210622] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 01:33:12.695 [2024-12-09 05:28:04.210728] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 01:33:12.695 [2024-12-09 05:28:04.210750] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 01:33:12.695 [2024-12-09 05:28:04.210764] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
01:33:12.695 [2024-12-09 05:28:04.210793] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 01:33:12.695 [2024-12-09 05:28:04.224245] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 01:33:12.695 spare 01:33:12.695 05:28:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:12.695 05:28:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@764 -- # sleep 1 01:33:12.695 [2024-12-09 05:28:04.227003] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 01:33:13.630 05:28:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 01:33:13.630 05:28:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:33:13.630 05:28:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 01:33:13.630 05:28:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 01:33:13.630 05:28:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:33:13.630 05:28:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:33:13.630 05:28:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:33:13.630 05:28:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:13.630 05:28:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 01:33:13.888 05:28:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:13.888 05:28:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:33:13.888 "name": 
"raid_bdev1", 01:33:13.888 "uuid": "8003a835-d8e3-4c45-af3a-aa004e3141fc", 01:33:13.888 "strip_size_kb": 0, 01:33:13.888 "state": "online", 01:33:13.888 "raid_level": "raid1", 01:33:13.888 "superblock": true, 01:33:13.888 "num_base_bdevs": 2, 01:33:13.888 "num_base_bdevs_discovered": 2, 01:33:13.888 "num_base_bdevs_operational": 2, 01:33:13.888 "process": { 01:33:13.888 "type": "rebuild", 01:33:13.888 "target": "spare", 01:33:13.888 "progress": { 01:33:13.888 "blocks": 2560, 01:33:13.888 "percent": 32 01:33:13.888 } 01:33:13.888 }, 01:33:13.888 "base_bdevs_list": [ 01:33:13.888 { 01:33:13.888 "name": "spare", 01:33:13.888 "uuid": "8834343a-dc91-523d-aec4-e0ee545eb4f3", 01:33:13.888 "is_configured": true, 01:33:13.888 "data_offset": 256, 01:33:13.888 "data_size": 7936 01:33:13.888 }, 01:33:13.888 { 01:33:13.888 "name": "BaseBdev2", 01:33:13.888 "uuid": "eab057d5-e0ab-5ccf-8825-52a976c84aa5", 01:33:13.888 "is_configured": true, 01:33:13.888 "data_offset": 256, 01:33:13.888 "data_size": 7936 01:33:13.888 } 01:33:13.888 ] 01:33:13.888 }' 01:33:13.888 05:28:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:33:13.888 05:28:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 01:33:13.888 05:28:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:33:13.888 05:28:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 01:33:13.888 05:28:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 01:33:13.888 05:28:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:13.888 05:28:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 01:33:13.888 [2024-12-09 05:28:05.392354] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 01:33:13.888 [2024-12-09 05:28:05.435584] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 01:33:13.888 [2024-12-09 05:28:05.435840] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:33:13.888 [2024-12-09 05:28:05.435873] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 01:33:13.888 [2024-12-09 05:28:05.435885] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 01:33:13.888 05:28:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:13.888 05:28:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 01:33:13.888 05:28:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:33:13.888 05:28:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:33:13.888 05:28:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:33:13.888 05:28:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:33:13.888 05:28:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 01:33:13.888 05:28:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:33:13.888 05:28:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:33:13.888 05:28:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:33:13.888 05:28:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 01:33:13.888 05:28:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 01:33:13.888 05:28:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:33:13.888 05:28:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:13.888 05:28:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 01:33:13.888 05:28:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:14.146 05:28:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:33:14.146 "name": "raid_bdev1", 01:33:14.146 "uuid": "8003a835-d8e3-4c45-af3a-aa004e3141fc", 01:33:14.146 "strip_size_kb": 0, 01:33:14.146 "state": "online", 01:33:14.146 "raid_level": "raid1", 01:33:14.146 "superblock": true, 01:33:14.146 "num_base_bdevs": 2, 01:33:14.146 "num_base_bdevs_discovered": 1, 01:33:14.146 "num_base_bdevs_operational": 1, 01:33:14.146 "base_bdevs_list": [ 01:33:14.146 { 01:33:14.146 "name": null, 01:33:14.146 "uuid": "00000000-0000-0000-0000-000000000000", 01:33:14.146 "is_configured": false, 01:33:14.146 "data_offset": 0, 01:33:14.146 "data_size": 7936 01:33:14.146 }, 01:33:14.146 { 01:33:14.146 "name": "BaseBdev2", 01:33:14.146 "uuid": "eab057d5-e0ab-5ccf-8825-52a976c84aa5", 01:33:14.146 "is_configured": true, 01:33:14.146 "data_offset": 256, 01:33:14.146 "data_size": 7936 01:33:14.146 } 01:33:14.146 ] 01:33:14.146 }' 01:33:14.146 05:28:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:33:14.146 05:28:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 01:33:14.404 05:28:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 01:33:14.404 05:28:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:33:14.404 05:28:05 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 01:33:14.404 05:28:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 01:33:14.404 05:28:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:33:14.404 05:28:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:33:14.404 05:28:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:14.404 05:28:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:33:14.404 05:28:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 01:33:14.404 05:28:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:14.661 05:28:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:33:14.661 "name": "raid_bdev1", 01:33:14.661 "uuid": "8003a835-d8e3-4c45-af3a-aa004e3141fc", 01:33:14.661 "strip_size_kb": 0, 01:33:14.661 "state": "online", 01:33:14.661 "raid_level": "raid1", 01:33:14.661 "superblock": true, 01:33:14.661 "num_base_bdevs": 2, 01:33:14.661 "num_base_bdevs_discovered": 1, 01:33:14.661 "num_base_bdevs_operational": 1, 01:33:14.661 "base_bdevs_list": [ 01:33:14.661 { 01:33:14.661 "name": null, 01:33:14.661 "uuid": "00000000-0000-0000-0000-000000000000", 01:33:14.661 "is_configured": false, 01:33:14.661 "data_offset": 0, 01:33:14.661 "data_size": 7936 01:33:14.661 }, 01:33:14.661 { 01:33:14.661 "name": "BaseBdev2", 01:33:14.661 "uuid": "eab057d5-e0ab-5ccf-8825-52a976c84aa5", 01:33:14.661 "is_configured": true, 01:33:14.661 "data_offset": 256, 01:33:14.661 "data_size": 7936 01:33:14.661 } 01:33:14.661 ] 01:33:14.661 }' 01:33:14.661 05:28:06 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:33:14.661 05:28:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 01:33:14.661 05:28:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:33:14.661 05:28:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 01:33:14.661 05:28:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 01:33:14.661 05:28:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:14.661 05:28:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 01:33:14.661 05:28:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:14.661 05:28:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 01:33:14.661 05:28:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:14.661 05:28:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 01:33:14.661 [2024-12-09 05:28:06.141516] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 01:33:14.661 [2024-12-09 05:28:06.141615] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:33:14.661 [2024-12-09 05:28:06.141648] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 01:33:14.661 [2024-12-09 05:28:06.141664] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:33:14.661 [2024-12-09 05:28:06.141983] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:33:14.661 [2024-12-09 05:28:06.142005] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev1 01:33:14.661 [2024-12-09 05:28:06.142084] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 01:33:14.661 [2024-12-09 05:28:06.142119] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 01:33:14.661 [2024-12-09 05:28:06.142148] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 01:33:14.662 [2024-12-09 05:28:06.142161] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 01:33:14.662 BaseBdev1 01:33:14.662 05:28:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:14.662 05:28:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # sleep 1 01:33:15.596 05:28:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 01:33:15.596 05:28:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:33:15.596 05:28:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:33:15.596 05:28:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:33:15.596 05:28:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:33:15.596 05:28:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 01:33:15.596 05:28:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:33:15.596 05:28:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:33:15.596 05:28:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
01:33:15.596 05:28:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 01:33:15.596 05:28:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:33:15.596 05:28:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:15.596 05:28:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 01:33:15.596 05:28:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:33:15.596 05:28:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:15.596 05:28:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:33:15.596 "name": "raid_bdev1", 01:33:15.596 "uuid": "8003a835-d8e3-4c45-af3a-aa004e3141fc", 01:33:15.596 "strip_size_kb": 0, 01:33:15.596 "state": "online", 01:33:15.596 "raid_level": "raid1", 01:33:15.596 "superblock": true, 01:33:15.596 "num_base_bdevs": 2, 01:33:15.596 "num_base_bdevs_discovered": 1, 01:33:15.596 "num_base_bdevs_operational": 1, 01:33:15.596 "base_bdevs_list": [ 01:33:15.596 { 01:33:15.596 "name": null, 01:33:15.596 "uuid": "00000000-0000-0000-0000-000000000000", 01:33:15.596 "is_configured": false, 01:33:15.596 "data_offset": 0, 01:33:15.596 "data_size": 7936 01:33:15.596 }, 01:33:15.596 { 01:33:15.596 "name": "BaseBdev2", 01:33:15.596 "uuid": "eab057d5-e0ab-5ccf-8825-52a976c84aa5", 01:33:15.596 "is_configured": true, 01:33:15.596 "data_offset": 256, 01:33:15.596 "data_size": 7936 01:33:15.596 } 01:33:15.596 ] 01:33:15.596 }' 01:33:15.596 05:28:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:33:15.596 05:28:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 01:33:16.222 05:28:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 
-- # verify_raid_bdev_process raid_bdev1 none none 01:33:16.222 05:28:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:33:16.222 05:28:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 01:33:16.223 05:28:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 01:33:16.223 05:28:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:33:16.223 05:28:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:33:16.223 05:28:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:33:16.223 05:28:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:16.223 05:28:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 01:33:16.223 05:28:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:16.223 05:28:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:33:16.223 "name": "raid_bdev1", 01:33:16.223 "uuid": "8003a835-d8e3-4c45-af3a-aa004e3141fc", 01:33:16.223 "strip_size_kb": 0, 01:33:16.223 "state": "online", 01:33:16.223 "raid_level": "raid1", 01:33:16.223 "superblock": true, 01:33:16.223 "num_base_bdevs": 2, 01:33:16.223 "num_base_bdevs_discovered": 1, 01:33:16.223 "num_base_bdevs_operational": 1, 01:33:16.223 "base_bdevs_list": [ 01:33:16.223 { 01:33:16.223 "name": null, 01:33:16.223 "uuid": "00000000-0000-0000-0000-000000000000", 01:33:16.223 "is_configured": false, 01:33:16.223 "data_offset": 0, 01:33:16.223 "data_size": 7936 01:33:16.223 }, 01:33:16.223 { 01:33:16.223 "name": "BaseBdev2", 01:33:16.223 "uuid": "eab057d5-e0ab-5ccf-8825-52a976c84aa5", 01:33:16.223 "is_configured": 
true, 01:33:16.223 "data_offset": 256, 01:33:16.223 "data_size": 7936 01:33:16.223 } 01:33:16.223 ] 01:33:16.223 }' 01:33:16.223 05:28:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:33:16.223 05:28:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 01:33:16.223 05:28:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:33:16.480 05:28:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 01:33:16.480 05:28:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 01:33:16.480 05:28:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@652 -- # local es=0 01:33:16.480 05:28:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 01:33:16.480 05:28:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 01:33:16.480 05:28:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:33:16.480 05:28:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 01:33:16.480 05:28:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:33:16.480 05:28:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 01:33:16.480 05:28:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:16.480 05:28:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 01:33:16.480 [2024-12-09 05:28:07.818132] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 01:33:16.480 [2024-12-09 05:28:07.818400] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 01:33:16.481 [2024-12-09 05:28:07.818428] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 01:33:16.481 request: 01:33:16.481 { 01:33:16.481 "base_bdev": "BaseBdev1", 01:33:16.481 "raid_bdev": "raid_bdev1", 01:33:16.481 "method": "bdev_raid_add_base_bdev", 01:33:16.481 "req_id": 1 01:33:16.481 } 01:33:16.481 Got JSON-RPC error response 01:33:16.481 response: 01:33:16.481 { 01:33:16.481 "code": -22, 01:33:16.481 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 01:33:16.481 } 01:33:16.481 05:28:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 01:33:16.481 05:28:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # es=1 01:33:16.481 05:28:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:33:16.481 05:28:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:33:16.481 05:28:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:33:16.481 05:28:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # sleep 1 01:33:17.414 05:28:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 01:33:17.414 05:28:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:33:17.414 05:28:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:33:17.414 05:28:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 01:33:17.414 05:28:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:33:17.414 05:28:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 01:33:17.414 05:28:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:33:17.414 05:28:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:33:17.414 05:28:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:33:17.414 05:28:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 01:33:17.414 05:28:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:33:17.414 05:28:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:33:17.414 05:28:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:17.414 05:28:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 01:33:17.414 05:28:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:17.414 05:28:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:33:17.414 "name": "raid_bdev1", 01:33:17.414 "uuid": "8003a835-d8e3-4c45-af3a-aa004e3141fc", 01:33:17.414 "strip_size_kb": 0, 01:33:17.414 "state": "online", 01:33:17.414 "raid_level": "raid1", 01:33:17.414 "superblock": true, 01:33:17.414 "num_base_bdevs": 2, 01:33:17.414 "num_base_bdevs_discovered": 1, 01:33:17.414 "num_base_bdevs_operational": 1, 01:33:17.414 "base_bdevs_list": [ 01:33:17.414 { 01:33:17.414 "name": null, 01:33:17.414 "uuid": "00000000-0000-0000-0000-000000000000", 01:33:17.414 "is_configured": false, 01:33:17.414 
"data_offset": 0, 01:33:17.414 "data_size": 7936 01:33:17.414 }, 01:33:17.414 { 01:33:17.414 "name": "BaseBdev2", 01:33:17.414 "uuid": "eab057d5-e0ab-5ccf-8825-52a976c84aa5", 01:33:17.414 "is_configured": true, 01:33:17.414 "data_offset": 256, 01:33:17.414 "data_size": 7936 01:33:17.414 } 01:33:17.414 ] 01:33:17.414 }' 01:33:17.414 05:28:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:33:17.414 05:28:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 01:33:17.980 05:28:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 01:33:17.980 05:28:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:33:17.981 05:28:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 01:33:17.981 05:28:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 01:33:17.981 05:28:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:33:17.981 05:28:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:33:17.981 05:28:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:17.981 05:28:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 01:33:17.981 05:28:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:33:17.981 05:28:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:17.981 05:28:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:33:17.981 "name": "raid_bdev1", 01:33:17.981 "uuid": "8003a835-d8e3-4c45-af3a-aa004e3141fc", 01:33:17.981 
"strip_size_kb": 0, 01:33:17.981 "state": "online", 01:33:17.981 "raid_level": "raid1", 01:33:17.981 "superblock": true, 01:33:17.981 "num_base_bdevs": 2, 01:33:17.981 "num_base_bdevs_discovered": 1, 01:33:17.981 "num_base_bdevs_operational": 1, 01:33:17.981 "base_bdevs_list": [ 01:33:17.981 { 01:33:17.981 "name": null, 01:33:17.981 "uuid": "00000000-0000-0000-0000-000000000000", 01:33:17.981 "is_configured": false, 01:33:17.981 "data_offset": 0, 01:33:17.981 "data_size": 7936 01:33:17.981 }, 01:33:17.981 { 01:33:17.981 "name": "BaseBdev2", 01:33:17.981 "uuid": "eab057d5-e0ab-5ccf-8825-52a976c84aa5", 01:33:17.981 "is_configured": true, 01:33:17.981 "data_offset": 256, 01:33:17.981 "data_size": 7936 01:33:17.981 } 01:33:17.981 ] 01:33:17.981 }' 01:33:17.981 05:28:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:33:17.981 05:28:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 01:33:17.981 05:28:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:33:17.981 05:28:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 01:33:17.981 05:28:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # killprocess 88160 01:33:17.981 05:28:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 88160 ']' 01:33:17.981 05:28:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 88160 01:33:17.981 05:28:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 01:33:17.981 05:28:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:33:17.981 05:28:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88160 01:33:17.981 killing process with 
pid 88160 01:33:17.981 Received shutdown signal, test time was about 60.000000 seconds 01:33:17.981 01:33:17.981 Latency(us) 01:33:17.981 [2024-12-09T05:28:09.598Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:33:17.981 [2024-12-09T05:28:09.598Z] =================================================================================================================== 01:33:17.981 [2024-12-09T05:28:09.598Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 01:33:17.981 05:28:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:33:17.981 05:28:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:33:17.981 05:28:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88160' 01:33:17.981 05:28:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 88160 01:33:17.981 [2024-12-09 05:28:09.538516] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 01:33:17.981 05:28:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 88160 01:33:17.981 [2024-12-09 05:28:09.538727] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 01:33:17.981 [2024-12-09 05:28:09.538851] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 01:33:17.981 [2024-12-09 05:28:09.538902] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 01:33:18.239 [2024-12-09 05:28:09.805885] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 01:33:19.617 ************************************ 01:33:19.617 END TEST raid_rebuild_test_sb_md_separate 01:33:19.617 ************************************ 01:33:19.617 05:28:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@786 -- # 
return 0 01:33:19.617 01:33:19.617 real 0m21.520s 01:33:19.617 user 0m29.033s 01:33:19.617 sys 0m2.515s 01:33:19.617 05:28:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 01:33:19.617 05:28:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 01:33:19.617 05:28:10 bdev_raid -- bdev/bdev_raid.sh@1010 -- # base_malloc_params='-m 32 -i' 01:33:19.617 05:28:10 bdev_raid -- bdev/bdev_raid.sh@1011 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 01:33:19.617 05:28:10 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 01:33:19.617 05:28:10 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 01:33:19.617 05:28:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 01:33:19.617 ************************************ 01:33:19.617 START TEST raid_state_function_test_sb_md_interleaved 01:33:19.617 ************************************ 01:33:19.617 05:28:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 01:33:19.617 05:28:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 01:33:19.617 05:28:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 01:33:19.617 05:28:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # local superblock=true 01:33:19.617 05:28:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # local raid_bdev 01:33:19.617 05:28:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 01:33:19.617 05:28:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 01:33:19.617 05:28:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 
-- # echo BaseBdev1 01:33:19.617 05:28:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 01:33:19.617 05:28:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 01:33:19.617 05:28:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 01:33:19.617 05:28:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 01:33:19.617 05:28:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 01:33:19.617 05:28:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 01:33:19.617 05:28:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # local base_bdevs 01:33:19.617 05:28:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 01:33:19.617 05:28:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # local strip_size 01:33:19.617 05:28:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 01:33:19.617 05:28:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 01:33:19.617 05:28:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 01:33:19.617 05:28:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@219 -- # strip_size=0 01:33:19.617 05:28:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 01:33:19.617 05:28:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 01:33:19.617 05:28:10 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@229 -- # raid_pid=88858 01:33:19.617 Process raid pid: 88858 01:33:19.617 05:28:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 88858' 01:33:19.617 05:28:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@231 -- # waitforlisten 88858 01:33:19.617 05:28:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 01:33:19.617 05:28:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 88858 ']' 01:33:19.617 05:28:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:33:19.617 05:28:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 01:33:19.617 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:33:19.617 05:28:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:33:19.617 05:28:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 01:33:19.617 05:28:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 01:33:19.617 [2024-12-09 05:28:11.078817] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
01:33:19.617 [2024-12-09 05:28:11.078977] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:33:19.880 [2024-12-09 05:28:11.254196] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:33:19.880 [2024-12-09 05:28:11.380187] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:33:20.137 [2024-12-09 05:28:11.576826] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 01:33:20.137 [2024-12-09 05:28:11.576868] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 01:33:20.705 05:28:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:33:20.705 05:28:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 01:33:20.705 05:28:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 01:33:20.705 05:28:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:20.705 05:28:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 01:33:20.705 [2024-12-09 05:28:12.062038] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 01:33:20.705 [2024-12-09 05:28:12.062106] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 01:33:20.705 [2024-12-09 05:28:12.062125] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 01:33:20.705 [2024-12-09 05:28:12.062142] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 01:33:20.705 05:28:12 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:20.705 05:28:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 01:33:20.705 05:28:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:33:20.705 05:28:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:33:20.705 05:28:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:33:20.705 05:28:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:33:20.705 05:28:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 01:33:20.705 05:28:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:33:20.705 05:28:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:33:20.705 05:28:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:33:20.705 05:28:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 01:33:20.705 05:28:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:33:20.705 05:28:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:20.705 05:28:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 01:33:20.705 05:28:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:33:20.705 05:28:12 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:20.705 05:28:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:33:20.705 "name": "Existed_Raid", 01:33:20.705 "uuid": "5d60549b-b8da-420a-b267-126080de59c5", 01:33:20.705 "strip_size_kb": 0, 01:33:20.705 "state": "configuring", 01:33:20.705 "raid_level": "raid1", 01:33:20.705 "superblock": true, 01:33:20.705 "num_base_bdevs": 2, 01:33:20.705 "num_base_bdevs_discovered": 0, 01:33:20.705 "num_base_bdevs_operational": 2, 01:33:20.705 "base_bdevs_list": [ 01:33:20.705 { 01:33:20.705 "name": "BaseBdev1", 01:33:20.705 "uuid": "00000000-0000-0000-0000-000000000000", 01:33:20.705 "is_configured": false, 01:33:20.705 "data_offset": 0, 01:33:20.705 "data_size": 0 01:33:20.705 }, 01:33:20.705 { 01:33:20.705 "name": "BaseBdev2", 01:33:20.705 "uuid": "00000000-0000-0000-0000-000000000000", 01:33:20.705 "is_configured": false, 01:33:20.705 "data_offset": 0, 01:33:20.705 "data_size": 0 01:33:20.705 } 01:33:20.705 ] 01:33:20.705 }' 01:33:20.705 05:28:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:33:20.705 05:28:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 01:33:20.964 05:28:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 01:33:20.964 05:28:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:20.964 05:28:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 01:33:21.223 [2024-12-09 05:28:12.582143] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 01:33:21.223 [2024-12-09 05:28:12.582197] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state 
configuring 01:33:21.223 05:28:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:21.223 05:28:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 01:33:21.223 05:28:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:21.223 05:28:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 01:33:21.223 [2024-12-09 05:28:12.590101] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 01:33:21.223 [2024-12-09 05:28:12.590194] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 01:33:21.223 [2024-12-09 05:28:12.590209] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 01:33:21.223 [2024-12-09 05:28:12.590226] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 01:33:21.224 05:28:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:21.224 05:28:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 01:33:21.224 05:28:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:21.224 05:28:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 01:33:21.224 [2024-12-09 05:28:12.633232] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 01:33:21.224 BaseBdev1 01:33:21.224 05:28:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:21.224 05:28:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 01:33:21.224 05:28:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 01:33:21.224 05:28:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 01:33:21.224 05:28:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 01:33:21.224 05:28:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 01:33:21.224 05:28:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 01:33:21.224 05:28:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 01:33:21.224 05:28:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:21.224 05:28:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 01:33:21.224 05:28:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:21.224 05:28:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 01:33:21.224 05:28:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:21.224 05:28:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 01:33:21.224 [ 01:33:21.224 { 01:33:21.224 "name": "BaseBdev1", 01:33:21.224 "aliases": [ 01:33:21.224 "95b1013b-4e68-4c35-ba69-65404eaaea2d" 01:33:21.224 ], 01:33:21.224 "product_name": "Malloc disk", 01:33:21.224 "block_size": 4128, 01:33:21.224 "num_blocks": 8192, 01:33:21.224 "uuid": "95b1013b-4e68-4c35-ba69-65404eaaea2d", 01:33:21.224 "md_size": 32, 01:33:21.224 
"md_interleave": true, 01:33:21.224 "dif_type": 0, 01:33:21.224 "assigned_rate_limits": { 01:33:21.224 "rw_ios_per_sec": 0, 01:33:21.224 "rw_mbytes_per_sec": 0, 01:33:21.224 "r_mbytes_per_sec": 0, 01:33:21.224 "w_mbytes_per_sec": 0 01:33:21.224 }, 01:33:21.224 "claimed": true, 01:33:21.224 "claim_type": "exclusive_write", 01:33:21.224 "zoned": false, 01:33:21.224 "supported_io_types": { 01:33:21.224 "read": true, 01:33:21.224 "write": true, 01:33:21.224 "unmap": true, 01:33:21.224 "flush": true, 01:33:21.224 "reset": true, 01:33:21.224 "nvme_admin": false, 01:33:21.224 "nvme_io": false, 01:33:21.224 "nvme_io_md": false, 01:33:21.224 "write_zeroes": true, 01:33:21.224 "zcopy": true, 01:33:21.224 "get_zone_info": false, 01:33:21.224 "zone_management": false, 01:33:21.224 "zone_append": false, 01:33:21.224 "compare": false, 01:33:21.224 "compare_and_write": false, 01:33:21.224 "abort": true, 01:33:21.224 "seek_hole": false, 01:33:21.224 "seek_data": false, 01:33:21.224 "copy": true, 01:33:21.224 "nvme_iov_md": false 01:33:21.224 }, 01:33:21.224 "memory_domains": [ 01:33:21.224 { 01:33:21.224 "dma_device_id": "system", 01:33:21.224 "dma_device_type": 1 01:33:21.224 }, 01:33:21.224 { 01:33:21.224 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:33:21.224 "dma_device_type": 2 01:33:21.224 } 01:33:21.224 ], 01:33:21.224 "driver_specific": {} 01:33:21.224 } 01:33:21.224 ] 01:33:21.224 05:28:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:21.224 05:28:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 01:33:21.224 05:28:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 01:33:21.224 05:28:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:33:21.224 05:28:12 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:33:21.224 05:28:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:33:21.224 05:28:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:33:21.224 05:28:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 01:33:21.224 05:28:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:33:21.224 05:28:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:33:21.224 05:28:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:33:21.224 05:28:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 01:33:21.224 05:28:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:33:21.224 05:28:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:33:21.224 05:28:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:21.224 05:28:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 01:33:21.224 05:28:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:21.224 05:28:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:33:21.224 "name": "Existed_Raid", 01:33:21.224 "uuid": "571971ec-af29-4e8f-9eb3-216cc61c8b9f", 01:33:21.224 "strip_size_kb": 0, 01:33:21.224 "state": "configuring", 01:33:21.224 "raid_level": "raid1", 
01:33:21.224 "superblock": true, 01:33:21.224 "num_base_bdevs": 2, 01:33:21.224 "num_base_bdevs_discovered": 1, 01:33:21.224 "num_base_bdevs_operational": 2, 01:33:21.224 "base_bdevs_list": [ 01:33:21.224 { 01:33:21.224 "name": "BaseBdev1", 01:33:21.224 "uuid": "95b1013b-4e68-4c35-ba69-65404eaaea2d", 01:33:21.224 "is_configured": true, 01:33:21.224 "data_offset": 256, 01:33:21.224 "data_size": 7936 01:33:21.224 }, 01:33:21.224 { 01:33:21.224 "name": "BaseBdev2", 01:33:21.224 "uuid": "00000000-0000-0000-0000-000000000000", 01:33:21.224 "is_configured": false, 01:33:21.224 "data_offset": 0, 01:33:21.224 "data_size": 0 01:33:21.224 } 01:33:21.224 ] 01:33:21.224 }' 01:33:21.224 05:28:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:33:21.224 05:28:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 01:33:21.792 05:28:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 01:33:21.792 05:28:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:21.792 05:28:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 01:33:21.792 [2024-12-09 05:28:13.189472] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 01:33:21.792 [2024-12-09 05:28:13.189529] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 01:33:21.792 05:28:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:21.792 05:28:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 01:33:21.792 05:28:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 
-- # xtrace_disable 01:33:21.792 05:28:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 01:33:21.792 [2024-12-09 05:28:13.197532] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 01:33:21.792 [2024-12-09 05:28:13.200077] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 01:33:21.792 [2024-12-09 05:28:13.200143] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 01:33:21.792 05:28:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:21.792 05:28:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 01:33:21.792 05:28:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 01:33:21.792 05:28:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 01:33:21.792 05:28:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:33:21.792 05:28:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:33:21.792 05:28:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:33:21.792 05:28:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:33:21.792 05:28:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 01:33:21.792 05:28:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:33:21.792 05:28:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:33:21.792 
05:28:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:33:21.792 05:28:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 01:33:21.792 05:28:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:33:21.792 05:28:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:33:21.793 05:28:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:21.793 05:28:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 01:33:21.793 05:28:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:21.793 05:28:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:33:21.793 "name": "Existed_Raid", 01:33:21.793 "uuid": "7c5b3947-4965-4b34-992a-9801da3c0cca", 01:33:21.793 "strip_size_kb": 0, 01:33:21.793 "state": "configuring", 01:33:21.793 "raid_level": "raid1", 01:33:21.793 "superblock": true, 01:33:21.793 "num_base_bdevs": 2, 01:33:21.793 "num_base_bdevs_discovered": 1, 01:33:21.793 "num_base_bdevs_operational": 2, 01:33:21.793 "base_bdevs_list": [ 01:33:21.793 { 01:33:21.793 "name": "BaseBdev1", 01:33:21.793 "uuid": "95b1013b-4e68-4c35-ba69-65404eaaea2d", 01:33:21.793 "is_configured": true, 01:33:21.793 "data_offset": 256, 01:33:21.793 "data_size": 7936 01:33:21.793 }, 01:33:21.793 { 01:33:21.793 "name": "BaseBdev2", 01:33:21.793 "uuid": "00000000-0000-0000-0000-000000000000", 01:33:21.793 "is_configured": false, 01:33:21.793 "data_offset": 0, 01:33:21.793 "data_size": 0 01:33:21.793 } 01:33:21.793 ] 01:33:21.793 }' 01:33:21.793 05:28:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 01:33:21.793 05:28:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 01:33:22.361 05:28:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 01:33:22.361 05:28:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:22.361 05:28:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 01:33:22.361 [2024-12-09 05:28:13.744097] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 01:33:22.361 [2024-12-09 05:28:13.744352] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 01:33:22.361 [2024-12-09 05:28:13.744395] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 01:33:22.361 [2024-12-09 05:28:13.744519] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 01:33:22.361 [2024-12-09 05:28:13.744615] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 01:33:22.361 [2024-12-09 05:28:13.744633] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 01:33:22.361 BaseBdev2 01:33:22.361 [2024-12-09 05:28:13.744718] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:33:22.361 05:28:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:22.361 05:28:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 01:33:22.361 05:28:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 01:33:22.361 05:28:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 
01:33:22.361 05:28:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 01:33:22.361 05:28:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 01:33:22.361 05:28:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 01:33:22.361 05:28:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 01:33:22.361 05:28:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:22.361 05:28:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 01:33:22.361 05:28:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:22.361 05:28:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 01:33:22.361 05:28:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:22.361 05:28:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 01:33:22.361 [ 01:33:22.361 { 01:33:22.361 "name": "BaseBdev2", 01:33:22.361 "aliases": [ 01:33:22.361 "369ce517-b876-42d1-8926-f3ff57b2ee61" 01:33:22.361 ], 01:33:22.361 "product_name": "Malloc disk", 01:33:22.361 "block_size": 4128, 01:33:22.361 "num_blocks": 8192, 01:33:22.361 "uuid": "369ce517-b876-42d1-8926-f3ff57b2ee61", 01:33:22.361 "md_size": 32, 01:33:22.361 "md_interleave": true, 01:33:22.361 "dif_type": 0, 01:33:22.361 "assigned_rate_limits": { 01:33:22.361 "rw_ios_per_sec": 0, 01:33:22.361 "rw_mbytes_per_sec": 0, 01:33:22.361 "r_mbytes_per_sec": 0, 01:33:22.361 "w_mbytes_per_sec": 0 01:33:22.361 }, 01:33:22.361 "claimed": true, 01:33:22.361 "claim_type": "exclusive_write", 
01:33:22.361 "zoned": false, 01:33:22.361 "supported_io_types": { 01:33:22.361 "read": true, 01:33:22.361 "write": true, 01:33:22.361 "unmap": true, 01:33:22.361 "flush": true, 01:33:22.361 "reset": true, 01:33:22.361 "nvme_admin": false, 01:33:22.361 "nvme_io": false, 01:33:22.361 "nvme_io_md": false, 01:33:22.361 "write_zeroes": true, 01:33:22.361 "zcopy": true, 01:33:22.361 "get_zone_info": false, 01:33:22.361 "zone_management": false, 01:33:22.361 "zone_append": false, 01:33:22.361 "compare": false, 01:33:22.361 "compare_and_write": false, 01:33:22.361 "abort": true, 01:33:22.361 "seek_hole": false, 01:33:22.361 "seek_data": false, 01:33:22.361 "copy": true, 01:33:22.361 "nvme_iov_md": false 01:33:22.361 }, 01:33:22.361 "memory_domains": [ 01:33:22.361 { 01:33:22.361 "dma_device_id": "system", 01:33:22.361 "dma_device_type": 1 01:33:22.361 }, 01:33:22.361 { 01:33:22.361 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:33:22.361 "dma_device_type": 2 01:33:22.361 } 01:33:22.361 ], 01:33:22.361 "driver_specific": {} 01:33:22.361 } 01:33:22.361 ] 01:33:22.361 05:28:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:22.361 05:28:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 01:33:22.361 05:28:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i++ )) 01:33:22.361 05:28:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 01:33:22.361 05:28:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 01:33:22.361 05:28:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:33:22.361 05:28:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:33:22.361 
05:28:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:33:22.361 05:28:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:33:22.361 05:28:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 01:33:22.361 05:28:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:33:22.361 05:28:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:33:22.361 05:28:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:33:22.361 05:28:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 01:33:22.361 05:28:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:33:22.361 05:28:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:33:22.361 05:28:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:22.361 05:28:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 01:33:22.361 05:28:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:22.361 05:28:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:33:22.361 "name": "Existed_Raid", 01:33:22.361 "uuid": "7c5b3947-4965-4b34-992a-9801da3c0cca", 01:33:22.361 "strip_size_kb": 0, 01:33:22.361 "state": "online", 01:33:22.361 "raid_level": "raid1", 01:33:22.361 "superblock": true, 01:33:22.361 "num_base_bdevs": 2, 01:33:22.361 "num_base_bdevs_discovered": 2, 01:33:22.361 
"num_base_bdevs_operational": 2, 01:33:22.361 "base_bdevs_list": [ 01:33:22.361 { 01:33:22.361 "name": "BaseBdev1", 01:33:22.361 "uuid": "95b1013b-4e68-4c35-ba69-65404eaaea2d", 01:33:22.361 "is_configured": true, 01:33:22.361 "data_offset": 256, 01:33:22.361 "data_size": 7936 01:33:22.361 }, 01:33:22.361 { 01:33:22.361 "name": "BaseBdev2", 01:33:22.361 "uuid": "369ce517-b876-42d1-8926-f3ff57b2ee61", 01:33:22.361 "is_configured": true, 01:33:22.361 "data_offset": 256, 01:33:22.361 "data_size": 7936 01:33:22.361 } 01:33:22.361 ] 01:33:22.361 }' 01:33:22.361 05:28:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:33:22.361 05:28:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 01:33:22.929 05:28:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 01:33:22.929 05:28:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 01:33:22.929 05:28:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 01:33:22.929 05:28:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 01:33:22.929 05:28:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 01:33:22.929 05:28:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 01:33:22.929 05:28:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 01:33:22.929 05:28:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:22.929 05:28:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 01:33:22.929 05:28:14 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 01:33:22.929 [2024-12-09 05:28:14.300968] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 01:33:22.929 05:28:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:22.929 05:28:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 01:33:22.929 "name": "Existed_Raid", 01:33:22.929 "aliases": [ 01:33:22.929 "7c5b3947-4965-4b34-992a-9801da3c0cca" 01:33:22.929 ], 01:33:22.929 "product_name": "Raid Volume", 01:33:22.929 "block_size": 4128, 01:33:22.929 "num_blocks": 7936, 01:33:22.929 "uuid": "7c5b3947-4965-4b34-992a-9801da3c0cca", 01:33:22.929 "md_size": 32, 01:33:22.929 "md_interleave": true, 01:33:22.929 "dif_type": 0, 01:33:22.929 "assigned_rate_limits": { 01:33:22.929 "rw_ios_per_sec": 0, 01:33:22.929 "rw_mbytes_per_sec": 0, 01:33:22.929 "r_mbytes_per_sec": 0, 01:33:22.929 "w_mbytes_per_sec": 0 01:33:22.929 }, 01:33:22.929 "claimed": false, 01:33:22.929 "zoned": false, 01:33:22.929 "supported_io_types": { 01:33:22.929 "read": true, 01:33:22.929 "write": true, 01:33:22.929 "unmap": false, 01:33:22.929 "flush": false, 01:33:22.929 "reset": true, 01:33:22.929 "nvme_admin": false, 01:33:22.929 "nvme_io": false, 01:33:22.929 "nvme_io_md": false, 01:33:22.929 "write_zeroes": true, 01:33:22.929 "zcopy": false, 01:33:22.929 "get_zone_info": false, 01:33:22.929 "zone_management": false, 01:33:22.929 "zone_append": false, 01:33:22.929 "compare": false, 01:33:22.929 "compare_and_write": false, 01:33:22.929 "abort": false, 01:33:22.929 "seek_hole": false, 01:33:22.929 "seek_data": false, 01:33:22.929 "copy": false, 01:33:22.929 "nvme_iov_md": false 01:33:22.929 }, 01:33:22.929 "memory_domains": [ 01:33:22.929 { 01:33:22.929 "dma_device_id": "system", 01:33:22.929 "dma_device_type": 1 01:33:22.929 }, 01:33:22.929 { 01:33:22.929 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 01:33:22.929 "dma_device_type": 2 01:33:22.929 }, 01:33:22.929 { 01:33:22.929 "dma_device_id": "system", 01:33:22.929 "dma_device_type": 1 01:33:22.929 }, 01:33:22.929 { 01:33:22.929 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:33:22.929 "dma_device_type": 2 01:33:22.929 } 01:33:22.929 ], 01:33:22.929 "driver_specific": { 01:33:22.929 "raid": { 01:33:22.929 "uuid": "7c5b3947-4965-4b34-992a-9801da3c0cca", 01:33:22.929 "strip_size_kb": 0, 01:33:22.929 "state": "online", 01:33:22.929 "raid_level": "raid1", 01:33:22.929 "superblock": true, 01:33:22.929 "num_base_bdevs": 2, 01:33:22.929 "num_base_bdevs_discovered": 2, 01:33:22.929 "num_base_bdevs_operational": 2, 01:33:22.929 "base_bdevs_list": [ 01:33:22.929 { 01:33:22.929 "name": "BaseBdev1", 01:33:22.929 "uuid": "95b1013b-4e68-4c35-ba69-65404eaaea2d", 01:33:22.929 "is_configured": true, 01:33:22.929 "data_offset": 256, 01:33:22.929 "data_size": 7936 01:33:22.929 }, 01:33:22.929 { 01:33:22.929 "name": "BaseBdev2", 01:33:22.929 "uuid": "369ce517-b876-42d1-8926-f3ff57b2ee61", 01:33:22.929 "is_configured": true, 01:33:22.929 "data_offset": 256, 01:33:22.929 "data_size": 7936 01:33:22.929 } 01:33:22.929 ] 01:33:22.929 } 01:33:22.929 } 01:33:22.929 }' 01:33:22.929 05:28:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 01:33:22.929 05:28:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 01:33:22.929 BaseBdev2' 01:33:22.929 05:28:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:33:22.929 05:28:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 01:33:22.929 05:28:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- 
# for name in $base_bdev_names 01:33:22.929 05:28:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:33:22.929 05:28:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 01:33:22.929 05:28:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:22.929 05:28:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 01:33:22.929 05:28:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:22.929 05:28:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 01:33:22.929 05:28:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 01:33:22.929 05:28:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:33:22.929 05:28:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 01:33:22.929 05:28:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:22.929 05:28:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 01:33:22.929 05:28:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:33:22.929 05:28:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:23.188 05:28:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 01:33:23.188 
05:28:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 01:33:23.188 05:28:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 01:33:23.188 05:28:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:23.188 05:28:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 01:33:23.188 [2024-12-09 05:28:14.556500] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 01:33:23.188 05:28:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:23.188 05:28:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # local expected_state 01:33:23.188 05:28:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 01:33:23.188 05:28:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 01:33:23.188 05:28:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 01:33:23.188 05:28:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # expected_state=online 01:33:23.188 05:28:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 01:33:23.188 05:28:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 01:33:23.188 05:28:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:33:23.188 05:28:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:33:23.189 05:28:14 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:33:23.189 05:28:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 01:33:23.189 05:28:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:33:23.189 05:28:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:33:23.189 05:28:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:33:23.189 05:28:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 01:33:23.189 05:28:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:33:23.189 05:28:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 01:33:23.189 05:28:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:23.189 05:28:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 01:33:23.189 05:28:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:23.189 05:28:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:33:23.189 "name": "Existed_Raid", 01:33:23.189 "uuid": "7c5b3947-4965-4b34-992a-9801da3c0cca", 01:33:23.189 "strip_size_kb": 0, 01:33:23.189 "state": "online", 01:33:23.189 "raid_level": "raid1", 01:33:23.189 "superblock": true, 01:33:23.189 "num_base_bdevs": 2, 01:33:23.189 "num_base_bdevs_discovered": 1, 01:33:23.189 "num_base_bdevs_operational": 1, 01:33:23.189 "base_bdevs_list": [ 01:33:23.189 { 01:33:23.189 "name": null, 01:33:23.189 "uuid": 
"00000000-0000-0000-0000-000000000000", 01:33:23.189 "is_configured": false, 01:33:23.189 "data_offset": 0, 01:33:23.189 "data_size": 7936 01:33:23.189 }, 01:33:23.189 { 01:33:23.189 "name": "BaseBdev2", 01:33:23.189 "uuid": "369ce517-b876-42d1-8926-f3ff57b2ee61", 01:33:23.189 "is_configured": true, 01:33:23.189 "data_offset": 256, 01:33:23.189 "data_size": 7936 01:33:23.189 } 01:33:23.189 ] 01:33:23.189 }' 01:33:23.189 05:28:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:33:23.189 05:28:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 01:33:23.755 05:28:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 01:33:23.755 05:28:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 01:33:23.755 05:28:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 01:33:23.755 05:28:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 01:33:23.755 05:28:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:23.755 05:28:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 01:33:23.755 05:28:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:23.756 05:28:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 01:33:23.756 05:28:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 01:33:23.756 05:28:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 01:33:23.756 05:28:15 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:23.756 05:28:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 01:33:23.756 [2024-12-09 05:28:15.231439] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 01:33:23.756 [2024-12-09 05:28:15.231684] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 01:33:23.756 [2024-12-09 05:28:15.319340] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 01:33:23.756 [2024-12-09 05:28:15.319446] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 01:33:23.756 [2024-12-09 05:28:15.319466] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 01:33:23.756 05:28:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:23.756 05:28:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i++ )) 01:33:23.756 05:28:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 01:33:23.756 05:28:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 01:33:23.756 05:28:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:23.756 05:28:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 01:33:23.756 05:28:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 01:33:23.756 05:28:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:24.013 05:28:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@278 -- # raid_bdev= 01:33:24.013 05:28:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 01:33:24.013 05:28:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 01:33:24.013 05:28:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@326 -- # killprocess 88858 01:33:24.013 05:28:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 88858 ']' 01:33:24.013 05:28:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 88858 01:33:24.013 05:28:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 01:33:24.013 05:28:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:33:24.013 05:28:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88858 01:33:24.013 killing process with pid 88858 01:33:24.013 05:28:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:33:24.013 05:28:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:33:24.013 05:28:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88858' 01:33:24.013 05:28:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 88858 01:33:24.013 [2024-12-09 05:28:15.412242] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 01:33:24.013 05:28:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 88858 01:33:24.013 [2024-12-09 05:28:15.427200] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 01:33:25.386 
05:28:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@328 -- # return 0 01:33:25.386 ************************************ 01:33:25.386 END TEST raid_state_function_test_sb_md_interleaved 01:33:25.386 ************************************ 01:33:25.386 01:33:25.386 real 0m5.632s 01:33:25.386 user 0m8.425s 01:33:25.386 sys 0m0.797s 01:33:25.386 05:28:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 01:33:25.386 05:28:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 01:33:25.386 05:28:16 bdev_raid -- bdev/bdev_raid.sh@1012 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 01:33:25.386 05:28:16 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 01:33:25.386 05:28:16 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 01:33:25.386 05:28:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 01:33:25.386 ************************************ 01:33:25.386 START TEST raid_superblock_test_md_interleaved 01:33:25.386 ************************************ 01:33:25.386 05:28:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 01:33:25.386 05:28:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 01:33:25.386 05:28:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 01:33:25.386 05:28:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 01:33:25.386 05:28:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 01:33:25.386 05:28:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 01:33:25.386 05:28:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local 
base_bdevs_pt 01:33:25.386 05:28:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 01:33:25.386 05:28:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 01:33:25.386 05:28:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 01:33:25.386 05:28:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size 01:33:25.386 05:28:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 01:33:25.386 05:28:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 01:33:25.386 05:28:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@402 -- # local raid_bdev 01:33:25.386 05:28:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 01:33:25.386 05:28:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@408 -- # strip_size=0 01:33:25.386 05:28:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # raid_pid=89116 01:33:25.386 05:28:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 01:33:25.386 05:28:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@413 -- # waitforlisten 89116 01:33:25.386 05:28:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 89116 ']' 01:33:25.386 05:28:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:33:25.386 05:28:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 01:33:25.386 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
01:33:25.386 05:28:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:33:25.386 05:28:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 01:33:25.386 05:28:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 01:33:25.386 [2024-12-09 05:28:16.777658] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:33:25.386 [2024-12-09 05:28:16.777824] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89116 ] 01:33:25.386 [2024-12-09 05:28:16.951837] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:33:25.644 [2024-12-09 05:28:17.087616] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:33:25.902 [2024-12-09 05:28:17.311158] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 01:33:25.902 [2024-12-09 05:28:17.311221] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 01:33:26.160 05:28:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:33:26.160 05:28:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@868 -- # return 0 01:33:26.160 05:28:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 01:33:26.160 05:28:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 01:33:26.160 05:28:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 01:33:26.160 05:28:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt1 01:33:26.160 05:28:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 01:33:26.160 05:28:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 01:33:26.160 05:28:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 01:33:26.160 05:28:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 01:33:26.160 05:28:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc1 01:33:26.160 05:28:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:26.160 05:28:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 01:33:26.160 malloc1 01:33:26.160 05:28:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:26.160 05:28:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 01:33:26.160 05:28:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:26.160 05:28:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 01:33:26.160 [2024-12-09 05:28:17.748890] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 01:33:26.160 [2024-12-09 05:28:17.748976] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:33:26.160 [2024-12-09 05:28:17.749010] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 01:33:26.160 [2024-12-09 05:28:17.749025] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:33:26.160 
[2024-12-09 05:28:17.751492] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:33:26.160 [2024-12-09 05:28:17.751821] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 01:33:26.160 pt1 01:33:26.160 05:28:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:26.160 05:28:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 01:33:26.160 05:28:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 01:33:26.160 05:28:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 01:33:26.160 05:28:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 01:33:26.160 05:28:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 01:33:26.160 05:28:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 01:33:26.160 05:28:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 01:33:26.160 05:28:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 01:33:26.160 05:28:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc2 01:33:26.160 05:28:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:26.160 05:28:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 01:33:26.418 malloc2 01:33:26.418 05:28:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:26.418 05:28:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 
-- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 01:33:26.418 05:28:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:26.418 05:28:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 01:33:26.418 [2024-12-09 05:28:17.805959] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 01:33:26.418 [2024-12-09 05:28:17.806033] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:33:26.418 [2024-12-09 05:28:17.806067] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 01:33:26.418 [2024-12-09 05:28:17.806109] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:33:26.418 [2024-12-09 05:28:17.808966] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:33:26.418 [2024-12-09 05:28:17.809005] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 01:33:26.418 pt2 01:33:26.418 05:28:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:26.418 05:28:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 01:33:26.418 05:28:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 01:33:26.418 05:28:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 01:33:26.418 05:28:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:26.418 05:28:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 01:33:26.418 [2024-12-09 05:28:17.813999] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 01:33:26.418 [2024-12-09 05:28:17.816684] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 01:33:26.418 [2024-12-09 05:28:17.816939] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 01:33:26.418 [2024-12-09 05:28:17.816957] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 01:33:26.418 [2024-12-09 05:28:17.817045] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 01:33:26.418 [2024-12-09 05:28:17.817140] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 01:33:26.419 [2024-12-09 05:28:17.817159] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 01:33:26.419 [2024-12-09 05:28:17.817242] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:33:26.419 05:28:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:26.419 05:28:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 01:33:26.419 05:28:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:33:26.419 05:28:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:33:26.419 05:28:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:33:26.419 05:28:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:33:26.419 05:28:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 01:33:26.419 05:28:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:33:26.419 05:28:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:33:26.419 
05:28:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:33:26.419 05:28:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 01:33:26.419 05:28:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:33:26.419 05:28:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:33:26.419 05:28:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:26.419 05:28:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 01:33:26.419 05:28:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:26.419 05:28:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:33:26.419 "name": "raid_bdev1", 01:33:26.419 "uuid": "7ff9a811-d371-4610-bbd8-64c3bafa9562", 01:33:26.419 "strip_size_kb": 0, 01:33:26.419 "state": "online", 01:33:26.419 "raid_level": "raid1", 01:33:26.419 "superblock": true, 01:33:26.419 "num_base_bdevs": 2, 01:33:26.419 "num_base_bdevs_discovered": 2, 01:33:26.419 "num_base_bdevs_operational": 2, 01:33:26.419 "base_bdevs_list": [ 01:33:26.419 { 01:33:26.419 "name": "pt1", 01:33:26.419 "uuid": "00000000-0000-0000-0000-000000000001", 01:33:26.419 "is_configured": true, 01:33:26.419 "data_offset": 256, 01:33:26.419 "data_size": 7936 01:33:26.419 }, 01:33:26.419 { 01:33:26.419 "name": "pt2", 01:33:26.419 "uuid": "00000000-0000-0000-0000-000000000002", 01:33:26.419 "is_configured": true, 01:33:26.419 "data_offset": 256, 01:33:26.419 "data_size": 7936 01:33:26.419 } 01:33:26.419 ] 01:33:26.419 }' 01:33:26.419 05:28:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:33:26.419 05:28:17 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 01:33:26.985 05:28:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 01:33:26.985 05:28:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 01:33:26.985 05:28:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 01:33:26.985 05:28:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 01:33:26.985 05:28:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 01:33:26.985 05:28:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 01:33:26.985 05:28:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 01:33:26.985 05:28:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 01:33:26.985 05:28:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:26.985 05:28:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 01:33:26.985 [2024-12-09 05:28:18.374594] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 01:33:26.985 05:28:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:26.985 05:28:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 01:33:26.985 "name": "raid_bdev1", 01:33:26.985 "aliases": [ 01:33:26.985 "7ff9a811-d371-4610-bbd8-64c3bafa9562" 01:33:26.985 ], 01:33:26.985 "product_name": "Raid Volume", 01:33:26.985 "block_size": 4128, 01:33:26.985 "num_blocks": 7936, 01:33:26.985 "uuid": "7ff9a811-d371-4610-bbd8-64c3bafa9562", 01:33:26.985 "md_size": 32, 
01:33:26.985 "md_interleave": true, 01:33:26.985 "dif_type": 0, 01:33:26.985 "assigned_rate_limits": { 01:33:26.985 "rw_ios_per_sec": 0, 01:33:26.985 "rw_mbytes_per_sec": 0, 01:33:26.985 "r_mbytes_per_sec": 0, 01:33:26.985 "w_mbytes_per_sec": 0 01:33:26.985 }, 01:33:26.985 "claimed": false, 01:33:26.985 "zoned": false, 01:33:26.985 "supported_io_types": { 01:33:26.985 "read": true, 01:33:26.985 "write": true, 01:33:26.985 "unmap": false, 01:33:26.985 "flush": false, 01:33:26.985 "reset": true, 01:33:26.985 "nvme_admin": false, 01:33:26.985 "nvme_io": false, 01:33:26.985 "nvme_io_md": false, 01:33:26.985 "write_zeroes": true, 01:33:26.985 "zcopy": false, 01:33:26.985 "get_zone_info": false, 01:33:26.985 "zone_management": false, 01:33:26.985 "zone_append": false, 01:33:26.985 "compare": false, 01:33:26.985 "compare_and_write": false, 01:33:26.985 "abort": false, 01:33:26.985 "seek_hole": false, 01:33:26.985 "seek_data": false, 01:33:26.985 "copy": false, 01:33:26.985 "nvme_iov_md": false 01:33:26.985 }, 01:33:26.985 "memory_domains": [ 01:33:26.985 { 01:33:26.985 "dma_device_id": "system", 01:33:26.985 "dma_device_type": 1 01:33:26.985 }, 01:33:26.985 { 01:33:26.985 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:33:26.985 "dma_device_type": 2 01:33:26.985 }, 01:33:26.985 { 01:33:26.985 "dma_device_id": "system", 01:33:26.985 "dma_device_type": 1 01:33:26.985 }, 01:33:26.985 { 01:33:26.985 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:33:26.985 "dma_device_type": 2 01:33:26.985 } 01:33:26.985 ], 01:33:26.985 "driver_specific": { 01:33:26.985 "raid": { 01:33:26.985 "uuid": "7ff9a811-d371-4610-bbd8-64c3bafa9562", 01:33:26.985 "strip_size_kb": 0, 01:33:26.985 "state": "online", 01:33:26.985 "raid_level": "raid1", 01:33:26.985 "superblock": true, 01:33:26.985 "num_base_bdevs": 2, 01:33:26.985 "num_base_bdevs_discovered": 2, 01:33:26.985 "num_base_bdevs_operational": 2, 01:33:26.985 "base_bdevs_list": [ 01:33:26.985 { 01:33:26.985 "name": "pt1", 01:33:26.985 "uuid": 
"00000000-0000-0000-0000-000000000001", 01:33:26.985 "is_configured": true, 01:33:26.985 "data_offset": 256, 01:33:26.985 "data_size": 7936 01:33:26.985 }, 01:33:26.985 { 01:33:26.985 "name": "pt2", 01:33:26.985 "uuid": "00000000-0000-0000-0000-000000000002", 01:33:26.985 "is_configured": true, 01:33:26.985 "data_offset": 256, 01:33:26.985 "data_size": 7936 01:33:26.985 } 01:33:26.985 ] 01:33:26.985 } 01:33:26.985 } 01:33:26.985 }' 01:33:26.985 05:28:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 01:33:26.985 05:28:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 01:33:26.985 pt2' 01:33:26.985 05:28:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:33:26.985 05:28:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 01:33:26.985 05:28:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:33:26.985 05:28:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 01:33:26.985 05:28:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:33:26.985 05:28:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:26.985 05:28:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 01:33:26.985 05:28:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:26.985 05:28:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 01:33:26.985 05:28:18 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 01:33:26.985 05:28:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:33:26.985 05:28:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 01:33:26.985 05:28:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:33:26.985 05:28:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:26.985 05:28:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 01:33:26.985 05:28:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:27.244 05:28:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 01:33:27.244 05:28:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 01:33:27.244 05:28:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 01:33:27.244 05:28:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:27.244 05:28:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 01:33:27.244 05:28:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 01:33:27.244 [2024-12-09 05:28:18.618490] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 01:33:27.244 05:28:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:27.244 05:28:18 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=7ff9a811-d371-4610-bbd8-64c3bafa9562 01:33:27.244 05:28:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # '[' -z 7ff9a811-d371-4610-bbd8-64c3bafa9562 ']' 01:33:27.244 05:28:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 01:33:27.244 05:28:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:27.244 05:28:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 01:33:27.244 [2024-12-09 05:28:18.658191] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 01:33:27.244 [2024-12-09 05:28:18.658365] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 01:33:27.244 [2024-12-09 05:28:18.658624] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 01:33:27.244 [2024-12-09 05:28:18.658817] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 01:33:27.244 [2024-12-09 05:28:18.658994] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 01:33:27.244 05:28:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:27.244 05:28:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 01:33:27.244 05:28:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 01:33:27.244 05:28:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:27.244 05:28:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 01:33:27.244 05:28:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:27.244 05:28:18 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # raid_bdev= 01:33:27.244 05:28:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 01:33:27.244 05:28:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 01:33:27.244 05:28:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 01:33:27.244 05:28:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:27.244 05:28:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 01:33:27.244 05:28:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:27.244 05:28:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 01:33:27.244 05:28:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 01:33:27.244 05:28:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:27.244 05:28:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 01:33:27.244 05:28:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:27.244 05:28:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 01:33:27.244 05:28:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 01:33:27.244 05:28:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:27.244 05:28:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 01:33:27.244 05:28:18 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:27.244 05:28:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 01:33:27.244 05:28:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 01:33:27.244 05:28:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 01:33:27.244 05:28:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 01:33:27.244 05:28:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 01:33:27.244 05:28:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:33:27.244 05:28:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 01:33:27.244 05:28:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:33:27.244 05:28:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 01:33:27.244 05:28:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:27.244 05:28:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 01:33:27.244 [2024-12-09 05:28:18.782217] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 01:33:27.244 [2024-12-09 05:28:18.784822] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 01:33:27.244 [2024-12-09 05:28:18.784929] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: 
Superblock of a different raid bdev found on bdev malloc1 01:33:27.245 [2024-12-09 05:28:18.785008] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 01:33:27.245 [2024-12-09 05:28:18.785034] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 01:33:27.245 [2024-12-09 05:28:18.785050] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 01:33:27.245 request: 01:33:27.245 { 01:33:27.245 "name": "raid_bdev1", 01:33:27.245 "raid_level": "raid1", 01:33:27.245 "base_bdevs": [ 01:33:27.245 "malloc1", 01:33:27.245 "malloc2" 01:33:27.245 ], 01:33:27.245 "superblock": false, 01:33:27.245 "method": "bdev_raid_create", 01:33:27.245 "req_id": 1 01:33:27.245 } 01:33:27.245 Got JSON-RPC error response 01:33:27.245 response: 01:33:27.245 { 01:33:27.245 "code": -17, 01:33:27.245 "message": "Failed to create RAID bdev raid_bdev1: File exists" 01:33:27.245 } 01:33:27.245 05:28:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 01:33:27.245 05:28:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # es=1 01:33:27.245 05:28:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:33:27.245 05:28:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:33:27.245 05:28:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:33:27.245 05:28:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 01:33:27.245 05:28:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:27.245 05:28:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 01:33:27.245 05:28:18 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 01:33:27.245 05:28:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:27.245 05:28:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # raid_bdev= 01:33:27.245 05:28:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 01:33:27.245 05:28:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 01:33:27.245 05:28:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:27.245 05:28:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 01:33:27.245 [2024-12-09 05:28:18.846366] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 01:33:27.245 [2024-12-09 05:28:18.846522] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:33:27.245 [2024-12-09 05:28:18.846558] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 01:33:27.245 [2024-12-09 05:28:18.846578] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:33:27.245 [2024-12-09 05:28:18.849414] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:33:27.245 [2024-12-09 05:28:18.849457] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 01:33:27.245 [2024-12-09 05:28:18.849539] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 01:33:27.245 [2024-12-09 05:28:18.849618] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 01:33:27.245 pt1 01:33:27.245 05:28:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:27.245 05:28:18 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 01:33:27.245 05:28:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:33:27.245 05:28:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 01:33:27.245 05:28:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:33:27.245 05:28:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:33:27.245 05:28:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 01:33:27.245 05:28:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:33:27.245 05:28:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:33:27.245 05:28:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:33:27.245 05:28:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 01:33:27.245 05:28:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:33:27.245 05:28:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:27.245 05:28:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 01:33:27.245 05:28:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:33:27.503 05:28:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:27.503 05:28:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:33:27.503 
"name": "raid_bdev1", 01:33:27.503 "uuid": "7ff9a811-d371-4610-bbd8-64c3bafa9562", 01:33:27.503 "strip_size_kb": 0, 01:33:27.503 "state": "configuring", 01:33:27.503 "raid_level": "raid1", 01:33:27.503 "superblock": true, 01:33:27.503 "num_base_bdevs": 2, 01:33:27.503 "num_base_bdevs_discovered": 1, 01:33:27.503 "num_base_bdevs_operational": 2, 01:33:27.503 "base_bdevs_list": [ 01:33:27.503 { 01:33:27.503 "name": "pt1", 01:33:27.503 "uuid": "00000000-0000-0000-0000-000000000001", 01:33:27.503 "is_configured": true, 01:33:27.503 "data_offset": 256, 01:33:27.503 "data_size": 7936 01:33:27.503 }, 01:33:27.503 { 01:33:27.503 "name": null, 01:33:27.503 "uuid": "00000000-0000-0000-0000-000000000002", 01:33:27.503 "is_configured": false, 01:33:27.503 "data_offset": 256, 01:33:27.503 "data_size": 7936 01:33:27.503 } 01:33:27.503 ] 01:33:27.503 }' 01:33:27.503 05:28:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:33:27.503 05:28:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 01:33:27.763 05:28:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 01:33:27.763 05:28:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 01:33:27.763 05:28:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 01:33:27.763 05:28:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 01:33:27.763 05:28:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:27.763 05:28:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 01:33:27.763 [2024-12-09 05:28:19.378381] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 01:33:28.022 [2024-12-09 05:28:19.378757] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:33:28.022 [2024-12-09 05:28:19.378816] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 01:33:28.022 [2024-12-09 05:28:19.378836] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:33:28.022 [2024-12-09 05:28:19.379049] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:33:28.022 [2024-12-09 05:28:19.379087] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 01:33:28.022 [2024-12-09 05:28:19.379191] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 01:33:28.023 [2024-12-09 05:28:19.379225] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 01:33:28.023 [2024-12-09 05:28:19.379348] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 01:33:28.023 [2024-12-09 05:28:19.379372] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 01:33:28.023 [2024-12-09 05:28:19.379523] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 01:33:28.023 [2024-12-09 05:28:19.379616] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 01:33:28.023 [2024-12-09 05:28:19.379630] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 01:33:28.023 [2024-12-09 05:28:19.379751] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:33:28.023 pt2 01:33:28.023 05:28:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:28.023 05:28:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i++ )) 01:33:28.023 05:28:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 01:33:28.023 05:28:19 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 01:33:28.023 05:28:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:33:28.023 05:28:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:33:28.023 05:28:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:33:28.023 05:28:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:33:28.023 05:28:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 01:33:28.023 05:28:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:33:28.023 05:28:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:33:28.023 05:28:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:33:28.023 05:28:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 01:33:28.023 05:28:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:33:28.023 05:28:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:33:28.023 05:28:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:28.023 05:28:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 01:33:28.023 05:28:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:28.023 05:28:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:33:28.023 "name": 
"raid_bdev1", 01:33:28.023 "uuid": "7ff9a811-d371-4610-bbd8-64c3bafa9562", 01:33:28.023 "strip_size_kb": 0, 01:33:28.023 "state": "online", 01:33:28.023 "raid_level": "raid1", 01:33:28.023 "superblock": true, 01:33:28.023 "num_base_bdevs": 2, 01:33:28.023 "num_base_bdevs_discovered": 2, 01:33:28.023 "num_base_bdevs_operational": 2, 01:33:28.023 "base_bdevs_list": [ 01:33:28.023 { 01:33:28.023 "name": "pt1", 01:33:28.023 "uuid": "00000000-0000-0000-0000-000000000001", 01:33:28.023 "is_configured": true, 01:33:28.023 "data_offset": 256, 01:33:28.023 "data_size": 7936 01:33:28.023 }, 01:33:28.023 { 01:33:28.023 "name": "pt2", 01:33:28.023 "uuid": "00000000-0000-0000-0000-000000000002", 01:33:28.023 "is_configured": true, 01:33:28.023 "data_offset": 256, 01:33:28.023 "data_size": 7936 01:33:28.023 } 01:33:28.023 ] 01:33:28.023 }' 01:33:28.023 05:28:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:33:28.023 05:28:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 01:33:28.285 05:28:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 01:33:28.285 05:28:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 01:33:28.285 05:28:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 01:33:28.286 05:28:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 01:33:28.286 05:28:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 01:33:28.286 05:28:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 01:33:28.286 05:28:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 01:33:28.286 05:28:19 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 01:33:28.286 05:28:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:28.548 05:28:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 01:33:28.548 [2024-12-09 05:28:19.902985] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 01:33:28.548 05:28:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:28.548 05:28:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 01:33:28.548 "name": "raid_bdev1", 01:33:28.548 "aliases": [ 01:33:28.548 "7ff9a811-d371-4610-bbd8-64c3bafa9562" 01:33:28.548 ], 01:33:28.548 "product_name": "Raid Volume", 01:33:28.548 "block_size": 4128, 01:33:28.548 "num_blocks": 7936, 01:33:28.548 "uuid": "7ff9a811-d371-4610-bbd8-64c3bafa9562", 01:33:28.548 "md_size": 32, 01:33:28.548 "md_interleave": true, 01:33:28.548 "dif_type": 0, 01:33:28.548 "assigned_rate_limits": { 01:33:28.548 "rw_ios_per_sec": 0, 01:33:28.548 "rw_mbytes_per_sec": 0, 01:33:28.548 "r_mbytes_per_sec": 0, 01:33:28.548 "w_mbytes_per_sec": 0 01:33:28.548 }, 01:33:28.548 "claimed": false, 01:33:28.548 "zoned": false, 01:33:28.548 "supported_io_types": { 01:33:28.548 "read": true, 01:33:28.548 "write": true, 01:33:28.548 "unmap": false, 01:33:28.548 "flush": false, 01:33:28.548 "reset": true, 01:33:28.548 "nvme_admin": false, 01:33:28.548 "nvme_io": false, 01:33:28.548 "nvme_io_md": false, 01:33:28.548 "write_zeroes": true, 01:33:28.548 "zcopy": false, 01:33:28.548 "get_zone_info": false, 01:33:28.548 "zone_management": false, 01:33:28.548 "zone_append": false, 01:33:28.548 "compare": false, 01:33:28.548 "compare_and_write": false, 01:33:28.548 "abort": false, 01:33:28.548 "seek_hole": false, 01:33:28.548 "seek_data": false, 01:33:28.548 "copy": false, 01:33:28.548 "nvme_iov_md": 
false 01:33:28.548 }, 01:33:28.548 "memory_domains": [ 01:33:28.548 { 01:33:28.548 "dma_device_id": "system", 01:33:28.548 "dma_device_type": 1 01:33:28.548 }, 01:33:28.548 { 01:33:28.548 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:33:28.548 "dma_device_type": 2 01:33:28.548 }, 01:33:28.548 { 01:33:28.548 "dma_device_id": "system", 01:33:28.548 "dma_device_type": 1 01:33:28.548 }, 01:33:28.548 { 01:33:28.548 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:33:28.548 "dma_device_type": 2 01:33:28.548 } 01:33:28.548 ], 01:33:28.548 "driver_specific": { 01:33:28.548 "raid": { 01:33:28.548 "uuid": "7ff9a811-d371-4610-bbd8-64c3bafa9562", 01:33:28.548 "strip_size_kb": 0, 01:33:28.548 "state": "online", 01:33:28.548 "raid_level": "raid1", 01:33:28.548 "superblock": true, 01:33:28.548 "num_base_bdevs": 2, 01:33:28.548 "num_base_bdevs_discovered": 2, 01:33:28.548 "num_base_bdevs_operational": 2, 01:33:28.548 "base_bdevs_list": [ 01:33:28.548 { 01:33:28.548 "name": "pt1", 01:33:28.548 "uuid": "00000000-0000-0000-0000-000000000001", 01:33:28.548 "is_configured": true, 01:33:28.548 "data_offset": 256, 01:33:28.548 "data_size": 7936 01:33:28.548 }, 01:33:28.548 { 01:33:28.548 "name": "pt2", 01:33:28.548 "uuid": "00000000-0000-0000-0000-000000000002", 01:33:28.548 "is_configured": true, 01:33:28.548 "data_offset": 256, 01:33:28.548 "data_size": 7936 01:33:28.548 } 01:33:28.548 ] 01:33:28.548 } 01:33:28.548 } 01:33:28.548 }' 01:33:28.548 05:28:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 01:33:28.548 05:28:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 01:33:28.548 pt2' 01:33:28.548 05:28:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:33:28.548 05:28:20 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 01:33:28.548 05:28:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:33:28.548 05:28:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 01:33:28.548 05:28:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:28.548 05:28:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 01:33:28.548 05:28:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:33:28.548 05:28:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:28.548 05:28:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 01:33:28.548 05:28:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 01:33:28.548 05:28:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 01:33:28.548 05:28:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 01:33:28.548 05:28:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:28.548 05:28:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 01:33:28.548 05:28:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 01:33:28.548 05:28:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:28.548 05:28:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='4128 32 true 0' 01:33:28.548 05:28:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 01:33:28.548 05:28:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 01:33:28.548 05:28:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:28.548 05:28:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 01:33:28.548 05:28:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 01:33:28.548 [2024-12-09 05:28:20.155160] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 01:33:28.807 05:28:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:28.807 05:28:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # '[' 7ff9a811-d371-4610-bbd8-64c3bafa9562 '!=' 7ff9a811-d371-4610-bbd8-64c3bafa9562 ']' 01:33:28.807 05:28:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 01:33:28.807 05:28:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 01:33:28.807 05:28:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 01:33:28.807 05:28:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 01:33:28.807 05:28:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:28.807 05:28:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 01:33:28.807 [2024-12-09 05:28:20.206869] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 01:33:28.807 05:28:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 01:33:28.807 05:28:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 01:33:28.807 05:28:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:33:28.807 05:28:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:33:28.807 05:28:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:33:28.807 05:28:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:33:28.807 05:28:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 01:33:28.807 05:28:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:33:28.807 05:28:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:33:28.807 05:28:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:33:28.807 05:28:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 01:33:28.807 05:28:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:33:28.807 05:28:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:33:28.807 05:28:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:28.807 05:28:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 01:33:28.807 05:28:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:28.807 05:28:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 01:33:28.807 "name": "raid_bdev1", 01:33:28.807 "uuid": "7ff9a811-d371-4610-bbd8-64c3bafa9562", 01:33:28.807 "strip_size_kb": 0, 01:33:28.807 "state": "online", 01:33:28.807 "raid_level": "raid1", 01:33:28.807 "superblock": true, 01:33:28.807 "num_base_bdevs": 2, 01:33:28.807 "num_base_bdevs_discovered": 1, 01:33:28.807 "num_base_bdevs_operational": 1, 01:33:28.807 "base_bdevs_list": [ 01:33:28.807 { 01:33:28.807 "name": null, 01:33:28.807 "uuid": "00000000-0000-0000-0000-000000000000", 01:33:28.807 "is_configured": false, 01:33:28.807 "data_offset": 0, 01:33:28.807 "data_size": 7936 01:33:28.807 }, 01:33:28.807 { 01:33:28.807 "name": "pt2", 01:33:28.807 "uuid": "00000000-0000-0000-0000-000000000002", 01:33:28.807 "is_configured": true, 01:33:28.807 "data_offset": 256, 01:33:28.807 "data_size": 7936 01:33:28.807 } 01:33:28.807 ] 01:33:28.807 }' 01:33:28.807 05:28:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:33:28.807 05:28:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 01:33:29.375 05:28:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 01:33:29.375 05:28:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:29.375 05:28:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 01:33:29.375 [2024-12-09 05:28:20.718894] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 01:33:29.375 [2024-12-09 05:28:20.718964] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 01:33:29.375 [2024-12-09 05:28:20.719079] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 01:33:29.375 [2024-12-09 05:28:20.719150] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
01:33:29.375 [2024-12-09 05:28:20.719170] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 01:33:29.375 05:28:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:29.375 05:28:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 01:33:29.375 05:28:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 01:33:29.375 05:28:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:29.375 05:28:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 01:33:29.375 05:28:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:29.375 05:28:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # raid_bdev= 01:33:29.375 05:28:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 01:33:29.375 05:28:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 01:33:29.375 05:28:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 01:33:29.375 05:28:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 01:33:29.375 05:28:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:29.375 05:28:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 01:33:29.375 05:28:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:29.375 05:28:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i++ )) 01:33:29.375 05:28:20 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 01:33:29.375 05:28:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 01:33:29.375 05:28:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 01:33:29.375 05:28:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # i=1 01:33:29.375 05:28:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 01:33:29.375 05:28:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:29.375 05:28:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 01:33:29.375 [2024-12-09 05:28:20.794855] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 01:33:29.375 [2024-12-09 05:28:20.794934] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:33:29.375 [2024-12-09 05:28:20.794963] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 01:33:29.375 [2024-12-09 05:28:20.794981] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:33:29.375 [2024-12-09 05:28:20.797762] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:33:29.375 [2024-12-09 05:28:20.798113] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 01:33:29.375 [2024-12-09 05:28:20.798212] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 01:33:29.375 [2024-12-09 05:28:20.798283] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 01:33:29.375 [2024-12-09 05:28:20.798430] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 01:33:29.375 [2024-12-09 05:28:20.798453] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: 
blockcnt 7936, blocklen 4128 01:33:29.375 [2024-12-09 05:28:20.798605] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 01:33:29.375 [2024-12-09 05:28:20.798700] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 01:33:29.375 [2024-12-09 05:28:20.798714] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 01:33:29.375 [2024-12-09 05:28:20.798840] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:33:29.375 pt2 01:33:29.375 05:28:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:29.375 05:28:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 01:33:29.375 05:28:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:33:29.375 05:28:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:33:29.375 05:28:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:33:29.375 05:28:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:33:29.375 05:28:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 01:33:29.375 05:28:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:33:29.375 05:28:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:33:29.375 05:28:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:33:29.375 05:28:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 01:33:29.376 05:28:20 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:33:29.376 05:28:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:29.376 05:28:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:33:29.376 05:28:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 01:33:29.376 05:28:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:29.376 05:28:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:33:29.376 "name": "raid_bdev1", 01:33:29.376 "uuid": "7ff9a811-d371-4610-bbd8-64c3bafa9562", 01:33:29.376 "strip_size_kb": 0, 01:33:29.376 "state": "online", 01:33:29.376 "raid_level": "raid1", 01:33:29.376 "superblock": true, 01:33:29.376 "num_base_bdevs": 2, 01:33:29.376 "num_base_bdevs_discovered": 1, 01:33:29.376 "num_base_bdevs_operational": 1, 01:33:29.376 "base_bdevs_list": [ 01:33:29.376 { 01:33:29.376 "name": null, 01:33:29.376 "uuid": "00000000-0000-0000-0000-000000000000", 01:33:29.376 "is_configured": false, 01:33:29.376 "data_offset": 256, 01:33:29.376 "data_size": 7936 01:33:29.376 }, 01:33:29.376 { 01:33:29.376 "name": "pt2", 01:33:29.376 "uuid": "00000000-0000-0000-0000-000000000002", 01:33:29.376 "is_configured": true, 01:33:29.376 "data_offset": 256, 01:33:29.376 "data_size": 7936 01:33:29.376 } 01:33:29.376 ] 01:33:29.376 }' 01:33:29.376 05:28:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:33:29.376 05:28:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 01:33:29.943 05:28:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 01:33:29.943 05:28:21 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:29.943 05:28:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 01:33:29.943 [2024-12-09 05:28:21.303039] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 01:33:29.944 [2024-12-09 05:28:21.303105] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 01:33:29.944 [2024-12-09 05:28:21.303211] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 01:33:29.944 [2024-12-09 05:28:21.303290] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 01:33:29.944 [2024-12-09 05:28:21.303307] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 01:33:29.944 05:28:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:29.944 05:28:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 01:33:29.944 05:28:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 01:33:29.944 05:28:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:29.944 05:28:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 01:33:29.944 05:28:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:29.944 05:28:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # raid_bdev= 01:33:29.944 05:28:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 01:33:29.944 05:28:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 01:33:29.944 05:28:21 bdev_raid.raid_superblock_test_md_interleaved 
-- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 01:33:29.944 05:28:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:29.944 05:28:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 01:33:29.944 [2024-12-09 05:28:21.371088] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 01:33:29.944 [2024-12-09 05:28:21.371319] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:33:29.944 [2024-12-09 05:28:21.371412] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 01:33:29.944 [2024-12-09 05:28:21.371690] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:33:29.944 [2024-12-09 05:28:21.374742] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:33:29.944 [2024-12-09 05:28:21.374789] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 01:33:29.944 [2024-12-09 05:28:21.374871] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 01:33:29.944 [2024-12-09 05:28:21.374941] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 01:33:29.944 [2024-12-09 05:28:21.375121] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 01:33:29.944 [2024-12-09 05:28:21.375138] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 01:33:29.944 [2024-12-09 05:28:21.375159] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 01:33:29.944 [2024-12-09 05:28:21.375222] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 01:33:29.944 [2024-12-09 05:28:21.375386] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000008900 01:33:29.944 [2024-12-09 05:28:21.375402] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 01:33:29.944 pt1 01:33:29.944 [2024-12-09 05:28:21.375486] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 01:33:29.944 [2024-12-09 05:28:21.375575] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 01:33:29.944 [2024-12-09 05:28:21.375593] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 01:33:29.944 [2024-12-09 05:28:21.375723] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:33:29.944 05:28:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:29.944 05:28:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 01:33:29.944 05:28:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 01:33:29.944 05:28:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:33:29.944 05:28:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:33:29.944 05:28:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:33:29.944 05:28:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:33:29.944 05:28:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 01:33:29.944 05:28:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:33:29.944 05:28:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:33:29.944 05:28:21 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:33:29.944 05:28:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 01:33:29.944 05:28:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:33:29.944 05:28:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:29.944 05:28:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:33:29.944 05:28:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 01:33:29.944 05:28:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:29.944 05:28:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:33:29.944 "name": "raid_bdev1", 01:33:29.944 "uuid": "7ff9a811-d371-4610-bbd8-64c3bafa9562", 01:33:29.944 "strip_size_kb": 0, 01:33:29.944 "state": "online", 01:33:29.944 "raid_level": "raid1", 01:33:29.944 "superblock": true, 01:33:29.944 "num_base_bdevs": 2, 01:33:29.944 "num_base_bdevs_discovered": 1, 01:33:29.944 "num_base_bdevs_operational": 1, 01:33:29.944 "base_bdevs_list": [ 01:33:29.944 { 01:33:29.944 "name": null, 01:33:29.944 "uuid": "00000000-0000-0000-0000-000000000000", 01:33:29.944 "is_configured": false, 01:33:29.944 "data_offset": 256, 01:33:29.944 "data_size": 7936 01:33:29.944 }, 01:33:29.944 { 01:33:29.944 "name": "pt2", 01:33:29.944 "uuid": "00000000-0000-0000-0000-000000000002", 01:33:29.944 "is_configured": true, 01:33:29.944 "data_offset": 256, 01:33:29.944 "data_size": 7936 01:33:29.944 } 01:33:29.944 ] 01:33:29.944 }' 01:33:29.944 05:28:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:33:29.944 05:28:21 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 01:33:30.512 05:28:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 01:33:30.512 05:28:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 01:33:30.512 05:28:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:30.512 05:28:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 01:33:30.512 05:28:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:30.512 05:28:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 01:33:30.512 05:28:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 01:33:30.512 05:28:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 01:33:30.512 05:28:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:30.512 05:28:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 01:33:30.512 [2024-12-09 05:28:21.959633] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 01:33:30.512 05:28:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:30.512 05:28:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # '[' 7ff9a811-d371-4610-bbd8-64c3bafa9562 '!=' 7ff9a811-d371-4610-bbd8-64c3bafa9562 ']' 01:33:30.512 05:28:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@563 -- # killprocess 89116 01:33:30.512 05:28:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 89116 ']' 01:33:30.512 05:28:22 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 89116 01:33:30.512 05:28:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # uname 01:33:30.512 05:28:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:33:30.512 05:28:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89116 01:33:30.512 killing process with pid 89116 01:33:30.512 05:28:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:33:30.512 05:28:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:33:30.512 05:28:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89116' 01:33:30.512 05:28:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@973 -- # kill 89116 01:33:30.512 [2024-12-09 05:28:22.038912] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 01:33:30.512 05:28:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@978 -- # wait 89116 01:33:30.512 [2024-12-09 05:28:22.039000] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 01:33:30.512 [2024-12-09 05:28:22.039058] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 01:33:30.512 [2024-12-09 05:28:22.039079] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 01:33:30.770 [2024-12-09 05:28:22.218668] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 01:33:32.148 05:28:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@565 -- # return 0 01:33:32.148 01:33:32.148 real 0m6.741s 01:33:32.148 user 0m10.515s 01:33:32.148 sys 0m1.014s 
01:33:32.148 05:28:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 01:33:32.148 05:28:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 01:33:32.148 ************************************ 01:33:32.148 END TEST raid_superblock_test_md_interleaved 01:33:32.148 ************************************ 01:33:32.148 05:28:23 bdev_raid -- bdev/bdev_raid.sh@1013 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 01:33:32.148 05:28:23 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 01:33:32.148 05:28:23 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 01:33:32.148 05:28:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 01:33:32.148 ************************************ 01:33:32.148 START TEST raid_rebuild_test_sb_md_interleaved 01:33:32.148 ************************************ 01:33:32.148 05:28:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false false 01:33:32.148 05:28:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 01:33:32.148 05:28:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 01:33:32.148 05:28:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local superblock=true 01:33:32.148 05:28:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local background_io=false 01:33:32.148 05:28:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local verify=false 01:33:32.148 05:28:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 01:33:32.148 05:28:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 01:33:32.148 05:28:23 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 01:33:32.148 05:28:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 01:33:32.148 05:28:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 01:33:32.148 05:28:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 01:33:32.148 05:28:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 01:33:32.148 05:28:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 01:33:32.148 05:28:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 01:33:32.148 05:28:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local base_bdevs 01:33:32.148 05:28:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 01:33:32.148 05:28:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local strip_size 01:33:32.148 05:28:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local create_arg 01:33:32.148 05:28:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 01:33:32.148 05:28:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # local data_offset 01:33:32.148 05:28:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 01:33:32.148 05:28:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # strip_size=0 01:33:32.148 05:28:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 01:33:32.148 05:28:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 01:33:32.148 
05:28:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@597 -- # raid_pid=89450 01:33:32.148 05:28:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@598 -- # waitforlisten 89450 01:33:32.148 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:33:32.148 05:28:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 89450 ']' 01:33:32.148 05:28:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:33:32.148 05:28:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 01:33:32.148 05:28:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 01:33:32.148 05:28:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:33:32.148 05:28:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 01:33:32.148 05:28:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 01:33:32.148 [2024-12-09 05:28:23.604082] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:33:32.148 [2024-12-09 05:28:23.604522] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89450 ] 01:33:32.149 I/O size of 3145728 is greater than zero copy threshold (65536). 01:33:32.149 Zero copy mechanism will not be used. 
01:33:32.407 [2024-12-09 05:28:23.797158] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:33:32.407 [2024-12-09 05:28:23.963604] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:33:32.667 [2024-12-09 05:28:24.191780] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 01:33:32.667 [2024-12-09 05:28:24.191881] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 01:33:32.926 05:28:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:33:32.926 05:28:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 01:33:32.926 05:28:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 01:33:32.926 05:28:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 01:33:32.926 05:28:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:32.926 05:28:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 01:33:33.186 BaseBdev1_malloc 01:33:33.186 05:28:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:33.186 05:28:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 01:33:33.186 05:28:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:33.186 05:28:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 01:33:33.186 [2024-12-09 05:28:24.559260] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 01:33:33.186 [2024-12-09 05:28:24.559367] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:33:33.186 
[2024-12-09 05:28:24.559405] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 01:33:33.186 [2024-12-09 05:28:24.559426] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:33:33.186 [2024-12-09 05:28:24.562060] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:33:33.186 [2024-12-09 05:28:24.562126] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 01:33:33.186 BaseBdev1 01:33:33.186 05:28:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:33.186 05:28:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 01:33:33.186 05:28:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 01:33:33.186 05:28:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:33.186 05:28:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 01:33:33.186 BaseBdev2_malloc 01:33:33.186 05:28:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:33.186 05:28:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 01:33:33.186 05:28:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:33.186 05:28:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 01:33:33.186 [2024-12-09 05:28:24.611228] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 01:33:33.186 [2024-12-09 05:28:24.611305] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:33:33.186 [2024-12-09 05:28:24.611336] vbdev_passthru.c: 
682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 01:33:33.186 [2024-12-09 05:28:24.611370] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:33:33.186 [2024-12-09 05:28:24.613936] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:33:33.186 [2024-12-09 05:28:24.613982] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 01:33:33.186 BaseBdev2 01:33:33.186 05:28:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:33.186 05:28:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 01:33:33.186 05:28:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:33.186 05:28:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 01:33:33.186 spare_malloc 01:33:33.186 05:28:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:33.186 05:28:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 01:33:33.186 05:28:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:33.186 05:28:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 01:33:33.186 spare_delay 01:33:33.186 05:28:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:33.186 05:28:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 01:33:33.186 05:28:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:33.186 05:28:24 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 01:33:33.186 [2024-12-09 05:28:24.679680] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 01:33:33.186 [2024-12-09 05:28:24.680081] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:33:33.186 [2024-12-09 05:28:24.680128] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 01:33:33.186 [2024-12-09 05:28:24.680150] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:33:33.186 [2024-12-09 05:28:24.682766] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:33:33.186 [2024-12-09 05:28:24.682811] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 01:33:33.186 spare 01:33:33.186 05:28:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:33.186 05:28:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 01:33:33.186 05:28:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:33.186 05:28:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 01:33:33.186 [2024-12-09 05:28:24.687741] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 01:33:33.186 [2024-12-09 05:28:24.690403] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 01:33:33.186 [2024-12-09 05:28:24.690824] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 01:33:33.186 [2024-12-09 05:28:24.690853] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 01:33:33.186 [2024-12-09 05:28:24.690995] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
01:33:33.186 [2024-12-09 05:28:24.691097] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 01:33:33.186 [2024-12-09 05:28:24.691112] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 01:33:33.186 [2024-12-09 05:28:24.691206] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:33:33.186 05:28:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:33.186 05:28:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 01:33:33.186 05:28:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:33:33.186 05:28:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:33:33.186 05:28:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:33:33.186 05:28:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:33:33.186 05:28:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 01:33:33.186 05:28:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:33:33.186 05:28:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:33:33.186 05:28:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:33:33.186 05:28:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 01:33:33.186 05:28:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:33:33.186 05:28:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # 
jq -r '.[] | select(.name == "raid_bdev1")' 01:33:33.186 05:28:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:33.186 05:28:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 01:33:33.186 05:28:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:33.187 05:28:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:33:33.187 "name": "raid_bdev1", 01:33:33.187 "uuid": "26e3df44-37b0-47d8-b8f2-5082bf1d56d5", 01:33:33.187 "strip_size_kb": 0, 01:33:33.187 "state": "online", 01:33:33.187 "raid_level": "raid1", 01:33:33.187 "superblock": true, 01:33:33.187 "num_base_bdevs": 2, 01:33:33.187 "num_base_bdevs_discovered": 2, 01:33:33.187 "num_base_bdevs_operational": 2, 01:33:33.187 "base_bdevs_list": [ 01:33:33.187 { 01:33:33.187 "name": "BaseBdev1", 01:33:33.187 "uuid": "197fc9e8-573c-5084-9079-663196b7243e", 01:33:33.187 "is_configured": true, 01:33:33.187 "data_offset": 256, 01:33:33.187 "data_size": 7936 01:33:33.187 }, 01:33:33.187 { 01:33:33.187 "name": "BaseBdev2", 01:33:33.187 "uuid": "f82e9185-499b-5eba-9aa9-d8a2be4d14d4", 01:33:33.187 "is_configured": true, 01:33:33.187 "data_offset": 256, 01:33:33.187 "data_size": 7936 01:33:33.187 } 01:33:33.187 ] 01:33:33.187 }' 01:33:33.187 05:28:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:33:33.187 05:28:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 01:33:33.753 05:28:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 01:33:33.753 05:28:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:33.753 05:28:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 01:33:33.753 
05:28:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 01:33:33.753 [2024-12-09 05:28:25.196322] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 01:33:33.753 05:28:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:33.753 05:28:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 01:33:33.753 05:28:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 01:33:33.754 05:28:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:33.754 05:28:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 01:33:33.754 05:28:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 01:33:33.754 05:28:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:33.754 05:28:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # data_offset=256 01:33:33.754 05:28:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 01:33:33.754 05:28:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # '[' false = true ']' 01:33:33.754 05:28:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 01:33:33.754 05:28:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:33.754 05:28:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 01:33:33.754 [2024-12-09 05:28:25.295959] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 01:33:33.754 05:28:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:33.754 05:28:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 01:33:33.754 05:28:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:33:33.754 05:28:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:33:33.754 05:28:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:33:33.754 05:28:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:33:33.754 05:28:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 01:33:33.754 05:28:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:33:33.754 05:28:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:33:33.754 05:28:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:33:33.754 05:28:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 01:33:33.754 05:28:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:33:33.754 05:28:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:33:33.754 05:28:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:33.754 05:28:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 01:33:33.754 05:28:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:33.754 05:28:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:33:33.754 "name": "raid_bdev1", 01:33:33.754 "uuid": "26e3df44-37b0-47d8-b8f2-5082bf1d56d5", 01:33:33.754 "strip_size_kb": 0, 01:33:33.754 "state": "online", 01:33:33.754 "raid_level": "raid1", 01:33:33.754 "superblock": true, 01:33:33.754 "num_base_bdevs": 2, 01:33:33.754 "num_base_bdevs_discovered": 1, 01:33:33.754 "num_base_bdevs_operational": 1, 01:33:33.754 "base_bdevs_list": [ 01:33:33.754 { 01:33:33.754 "name": null, 01:33:33.754 "uuid": "00000000-0000-0000-0000-000000000000", 01:33:33.754 "is_configured": false, 01:33:33.754 "data_offset": 0, 01:33:33.754 "data_size": 7936 01:33:33.754 }, 01:33:33.754 { 01:33:33.754 "name": "BaseBdev2", 01:33:33.754 "uuid": "f82e9185-499b-5eba-9aa9-d8a2be4d14d4", 01:33:33.754 "is_configured": true, 01:33:33.754 "data_offset": 256, 01:33:33.754 "data_size": 7936 01:33:33.754 } 01:33:33.754 ] 01:33:33.754 }' 01:33:33.754 05:28:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:33:33.754 05:28:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 01:33:34.320 05:28:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 01:33:34.320 05:28:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:34.320 05:28:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 01:33:34.320 [2024-12-09 05:28:25.812133] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 01:33:34.320 [2024-12-09 05:28:25.828725] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 01:33:34.320 05:28:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:34.320 05:28:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@647 -- # sleep 1 01:33:34.320 
[2024-12-09 05:28:25.831235] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 01:33:35.256 05:28:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 01:33:35.256 05:28:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:33:35.256 05:28:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 01:33:35.256 05:28:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 01:33:35.256 05:28:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:33:35.256 05:28:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:33:35.256 05:28:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:33:35.256 05:28:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:35.256 05:28:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 01:33:35.256 05:28:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:35.515 05:28:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:33:35.515 "name": "raid_bdev1", 01:33:35.515 "uuid": "26e3df44-37b0-47d8-b8f2-5082bf1d56d5", 01:33:35.515 "strip_size_kb": 0, 01:33:35.515 "state": "online", 01:33:35.515 "raid_level": "raid1", 01:33:35.515 "superblock": true, 01:33:35.515 "num_base_bdevs": 2, 01:33:35.515 "num_base_bdevs_discovered": 2, 01:33:35.515 "num_base_bdevs_operational": 2, 01:33:35.515 "process": { 01:33:35.515 "type": "rebuild", 01:33:35.515 "target": "spare", 01:33:35.515 "progress": { 01:33:35.515 
"blocks": 2560, 01:33:35.515 "percent": 32 01:33:35.515 } 01:33:35.515 }, 01:33:35.515 "base_bdevs_list": [ 01:33:35.515 { 01:33:35.515 "name": "spare", 01:33:35.515 "uuid": "07f14b81-3a11-5699-b8a6-23a79f2240fb", 01:33:35.515 "is_configured": true, 01:33:35.515 "data_offset": 256, 01:33:35.515 "data_size": 7936 01:33:35.515 }, 01:33:35.515 { 01:33:35.515 "name": "BaseBdev2", 01:33:35.515 "uuid": "f82e9185-499b-5eba-9aa9-d8a2be4d14d4", 01:33:35.515 "is_configured": true, 01:33:35.515 "data_offset": 256, 01:33:35.515 "data_size": 7936 01:33:35.515 } 01:33:35.515 ] 01:33:35.515 }' 01:33:35.515 05:28:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:33:35.515 05:28:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 01:33:35.515 05:28:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:33:35.515 05:28:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 01:33:35.515 05:28:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 01:33:35.515 05:28:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:35.515 05:28:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 01:33:35.515 [2024-12-09 05:28:26.985219] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 01:33:35.515 [2024-12-09 05:28:27.043912] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 01:33:35.515 [2024-12-09 05:28:27.044038] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:33:35.515 [2024-12-09 05:28:27.044062] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 01:33:35.515 [2024-12-09 05:28:27.044081] 
bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 01:33:35.515 05:28:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:35.515 05:28:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 01:33:35.515 05:28:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:33:35.515 05:28:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:33:35.515 05:28:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:33:35.515 05:28:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:33:35.515 05:28:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 01:33:35.515 05:28:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:33:35.515 05:28:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:33:35.515 05:28:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:33:35.515 05:28:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 01:33:35.515 05:28:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:33:35.515 05:28:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:35.515 05:28:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:33:35.515 05:28:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 
01:33:35.515 05:28:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:35.773 05:28:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:33:35.773 "name": "raid_bdev1", 01:33:35.773 "uuid": "26e3df44-37b0-47d8-b8f2-5082bf1d56d5", 01:33:35.773 "strip_size_kb": 0, 01:33:35.773 "state": "online", 01:33:35.773 "raid_level": "raid1", 01:33:35.773 "superblock": true, 01:33:35.773 "num_base_bdevs": 2, 01:33:35.773 "num_base_bdevs_discovered": 1, 01:33:35.773 "num_base_bdevs_operational": 1, 01:33:35.773 "base_bdevs_list": [ 01:33:35.773 { 01:33:35.773 "name": null, 01:33:35.773 "uuid": "00000000-0000-0000-0000-000000000000", 01:33:35.773 "is_configured": false, 01:33:35.773 "data_offset": 0, 01:33:35.773 "data_size": 7936 01:33:35.773 }, 01:33:35.773 { 01:33:35.773 "name": "BaseBdev2", 01:33:35.773 "uuid": "f82e9185-499b-5eba-9aa9-d8a2be4d14d4", 01:33:35.773 "is_configured": true, 01:33:35.773 "data_offset": 256, 01:33:35.773 "data_size": 7936 01:33:35.773 } 01:33:35.773 ] 01:33:35.773 }' 01:33:35.773 05:28:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:33:35.773 05:28:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 01:33:36.063 05:28:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 01:33:36.063 05:28:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:33:36.063 05:28:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 01:33:36.063 05:28:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 01:33:36.063 05:28:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:33:36.063 05:28:27 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:33:36.063 05:28:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:36.063 05:28:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 01:33:36.063 05:28:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:33:36.063 05:28:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:36.063 05:28:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:33:36.063 "name": "raid_bdev1", 01:33:36.063 "uuid": "26e3df44-37b0-47d8-b8f2-5082bf1d56d5", 01:33:36.063 "strip_size_kb": 0, 01:33:36.063 "state": "online", 01:33:36.063 "raid_level": "raid1", 01:33:36.063 "superblock": true, 01:33:36.063 "num_base_bdevs": 2, 01:33:36.063 "num_base_bdevs_discovered": 1, 01:33:36.063 "num_base_bdevs_operational": 1, 01:33:36.063 "base_bdevs_list": [ 01:33:36.063 { 01:33:36.063 "name": null, 01:33:36.063 "uuid": "00000000-0000-0000-0000-000000000000", 01:33:36.063 "is_configured": false, 01:33:36.063 "data_offset": 0, 01:33:36.063 "data_size": 7936 01:33:36.063 }, 01:33:36.063 { 01:33:36.063 "name": "BaseBdev2", 01:33:36.063 "uuid": "f82e9185-499b-5eba-9aa9-d8a2be4d14d4", 01:33:36.063 "is_configured": true, 01:33:36.063 "data_offset": 256, 01:33:36.063 "data_size": 7936 01:33:36.063 } 01:33:36.063 ] 01:33:36.063 }' 01:33:36.063 05:28:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:33:36.321 05:28:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 01:33:36.321 05:28:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:33:36.321 05:28:27 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 01:33:36.321 05:28:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 01:33:36.321 05:28:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:36.321 05:28:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 01:33:36.321 [2024-12-09 05:28:27.752315] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 01:33:36.321 [2024-12-09 05:28:27.768451] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 01:33:36.321 05:28:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:36.321 05:28:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # sleep 1 01:33:36.321 [2024-12-09 05:28:27.771271] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 01:33:37.255 05:28:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 01:33:37.255 05:28:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:33:37.255 05:28:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 01:33:37.255 05:28:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 01:33:37.255 05:28:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:33:37.255 05:28:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:33:37.255 05:28:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 01:33:37.255 05:28:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:37.255 05:28:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 01:33:37.255 05:28:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:37.255 05:28:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:33:37.255 "name": "raid_bdev1", 01:33:37.255 "uuid": "26e3df44-37b0-47d8-b8f2-5082bf1d56d5", 01:33:37.255 "strip_size_kb": 0, 01:33:37.255 "state": "online", 01:33:37.255 "raid_level": "raid1", 01:33:37.255 "superblock": true, 01:33:37.255 "num_base_bdevs": 2, 01:33:37.255 "num_base_bdevs_discovered": 2, 01:33:37.255 "num_base_bdevs_operational": 2, 01:33:37.255 "process": { 01:33:37.255 "type": "rebuild", 01:33:37.255 "target": "spare", 01:33:37.255 "progress": { 01:33:37.255 "blocks": 2560, 01:33:37.255 "percent": 32 01:33:37.255 } 01:33:37.255 }, 01:33:37.255 "base_bdevs_list": [ 01:33:37.255 { 01:33:37.255 "name": "spare", 01:33:37.255 "uuid": "07f14b81-3a11-5699-b8a6-23a79f2240fb", 01:33:37.255 "is_configured": true, 01:33:37.255 "data_offset": 256, 01:33:37.255 "data_size": 7936 01:33:37.255 }, 01:33:37.255 { 01:33:37.255 "name": "BaseBdev2", 01:33:37.255 "uuid": "f82e9185-499b-5eba-9aa9-d8a2be4d14d4", 01:33:37.255 "is_configured": true, 01:33:37.255 "data_offset": 256, 01:33:37.255 "data_size": 7936 01:33:37.256 } 01:33:37.256 ] 01:33:37.256 }' 01:33:37.256 05:28:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:33:37.256 05:28:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 01:33:37.256 05:28:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:33:37.513 05:28:28 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 01:33:37.513 05:28:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 01:33:37.513 05:28:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 01:33:37.513 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 01:33:37.513 05:28:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 01:33:37.513 05:28:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 01:33:37.513 05:28:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 01:33:37.513 05:28:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # local timeout=810 01:33:37.513 05:28:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 01:33:37.513 05:28:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 01:33:37.513 05:28:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:33:37.514 05:28:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 01:33:37.514 05:28:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 01:33:37.514 05:28:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:33:37.514 05:28:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:33:37.514 05:28:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:33:37.514 05:28:28 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:37.514 05:28:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 01:33:37.514 05:28:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:37.514 05:28:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:33:37.514 "name": "raid_bdev1", 01:33:37.514 "uuid": "26e3df44-37b0-47d8-b8f2-5082bf1d56d5", 01:33:37.514 "strip_size_kb": 0, 01:33:37.514 "state": "online", 01:33:37.514 "raid_level": "raid1", 01:33:37.514 "superblock": true, 01:33:37.514 "num_base_bdevs": 2, 01:33:37.514 "num_base_bdevs_discovered": 2, 01:33:37.514 "num_base_bdevs_operational": 2, 01:33:37.514 "process": { 01:33:37.514 "type": "rebuild", 01:33:37.514 "target": "spare", 01:33:37.514 "progress": { 01:33:37.514 "blocks": 2816, 01:33:37.514 "percent": 35 01:33:37.514 } 01:33:37.514 }, 01:33:37.514 "base_bdevs_list": [ 01:33:37.514 { 01:33:37.514 "name": "spare", 01:33:37.514 "uuid": "07f14b81-3a11-5699-b8a6-23a79f2240fb", 01:33:37.514 "is_configured": true, 01:33:37.514 "data_offset": 256, 01:33:37.514 "data_size": 7936 01:33:37.514 }, 01:33:37.514 { 01:33:37.514 "name": "BaseBdev2", 01:33:37.514 "uuid": "f82e9185-499b-5eba-9aa9-d8a2be4d14d4", 01:33:37.514 "is_configured": true, 01:33:37.514 "data_offset": 256, 01:33:37.514 "data_size": 7936 01:33:37.514 } 01:33:37.514 ] 01:33:37.514 }' 01:33:37.514 05:28:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:33:37.514 05:28:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 01:33:37.514 05:28:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:33:37.514 05:28:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 01:33:37.514 05:28:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 01:33:38.886 05:28:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 01:33:38.886 05:28:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 01:33:38.886 05:28:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:33:38.886 05:28:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 01:33:38.887 05:28:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 01:33:38.887 05:28:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:33:38.887 05:28:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:33:38.887 05:28:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:33:38.887 05:28:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:38.887 05:28:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 01:33:38.887 05:28:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:38.887 05:28:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:33:38.887 "name": "raid_bdev1", 01:33:38.887 "uuid": "26e3df44-37b0-47d8-b8f2-5082bf1d56d5", 01:33:38.887 "strip_size_kb": 0, 01:33:38.887 "state": "online", 01:33:38.887 "raid_level": "raid1", 01:33:38.887 "superblock": true, 01:33:38.887 "num_base_bdevs": 2, 01:33:38.887 "num_base_bdevs_discovered": 2, 01:33:38.887 
"num_base_bdevs_operational": 2, 01:33:38.887 "process": { 01:33:38.887 "type": "rebuild", 01:33:38.887 "target": "spare", 01:33:38.887 "progress": { 01:33:38.887 "blocks": 5632, 01:33:38.887 "percent": 70 01:33:38.887 } 01:33:38.887 }, 01:33:38.887 "base_bdevs_list": [ 01:33:38.887 { 01:33:38.887 "name": "spare", 01:33:38.887 "uuid": "07f14b81-3a11-5699-b8a6-23a79f2240fb", 01:33:38.887 "is_configured": true, 01:33:38.887 "data_offset": 256, 01:33:38.887 "data_size": 7936 01:33:38.887 }, 01:33:38.887 { 01:33:38.887 "name": "BaseBdev2", 01:33:38.887 "uuid": "f82e9185-499b-5eba-9aa9-d8a2be4d14d4", 01:33:38.887 "is_configured": true, 01:33:38.887 "data_offset": 256, 01:33:38.887 "data_size": 7936 01:33:38.887 } 01:33:38.887 ] 01:33:38.887 }' 01:33:38.887 05:28:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:33:38.887 05:28:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 01:33:38.887 05:28:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:33:38.887 05:28:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 01:33:38.887 05:28:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 01:33:39.452 [2024-12-09 05:28:30.899781] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 01:33:39.452 [2024-12-09 05:28:30.899916] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 01:33:39.452 [2024-12-09 05:28:30.900107] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:33:39.710 05:28:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 01:33:39.710 05:28:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 01:33:39.710 05:28:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:33:39.710 05:28:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 01:33:39.710 05:28:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 01:33:39.710 05:28:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:33:39.710 05:28:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:33:39.710 05:28:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:33:39.710 05:28:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:39.710 05:28:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 01:33:39.710 05:28:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:39.710 05:28:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:33:39.710 "name": "raid_bdev1", 01:33:39.710 "uuid": "26e3df44-37b0-47d8-b8f2-5082bf1d56d5", 01:33:39.710 "strip_size_kb": 0, 01:33:39.710 "state": "online", 01:33:39.710 "raid_level": "raid1", 01:33:39.710 "superblock": true, 01:33:39.710 "num_base_bdevs": 2, 01:33:39.710 "num_base_bdevs_discovered": 2, 01:33:39.710 "num_base_bdevs_operational": 2, 01:33:39.710 "base_bdevs_list": [ 01:33:39.710 { 01:33:39.710 "name": "spare", 01:33:39.710 "uuid": "07f14b81-3a11-5699-b8a6-23a79f2240fb", 01:33:39.710 "is_configured": true, 01:33:39.710 "data_offset": 256, 01:33:39.710 "data_size": 7936 01:33:39.710 }, 01:33:39.710 { 01:33:39.710 "name": "BaseBdev2", 01:33:39.710 "uuid": "f82e9185-499b-5eba-9aa9-d8a2be4d14d4", 01:33:39.710 
"is_configured": true, 01:33:39.710 "data_offset": 256, 01:33:39.710 "data_size": 7936 01:33:39.710 } 01:33:39.710 ] 01:33:39.710 }' 01:33:39.710 05:28:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:33:39.968 05:28:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 01:33:39.968 05:28:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:33:39.968 05:28:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 01:33:39.968 05:28:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@709 -- # break 01:33:39.968 05:28:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 01:33:39.968 05:28:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:33:39.968 05:28:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 01:33:39.968 05:28:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 01:33:39.968 05:28:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:33:39.969 05:28:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:33:39.969 05:28:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:33:39.969 05:28:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:39.969 05:28:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 01:33:39.969 05:28:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 01:33:39.969 05:28:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:33:39.969 "name": "raid_bdev1", 01:33:39.969 "uuid": "26e3df44-37b0-47d8-b8f2-5082bf1d56d5", 01:33:39.969 "strip_size_kb": 0, 01:33:39.969 "state": "online", 01:33:39.969 "raid_level": "raid1", 01:33:39.969 "superblock": true, 01:33:39.969 "num_base_bdevs": 2, 01:33:39.969 "num_base_bdevs_discovered": 2, 01:33:39.969 "num_base_bdevs_operational": 2, 01:33:39.969 "base_bdevs_list": [ 01:33:39.969 { 01:33:39.969 "name": "spare", 01:33:39.969 "uuid": "07f14b81-3a11-5699-b8a6-23a79f2240fb", 01:33:39.969 "is_configured": true, 01:33:39.969 "data_offset": 256, 01:33:39.969 "data_size": 7936 01:33:39.969 }, 01:33:39.969 { 01:33:39.969 "name": "BaseBdev2", 01:33:39.969 "uuid": "f82e9185-499b-5eba-9aa9-d8a2be4d14d4", 01:33:39.969 "is_configured": true, 01:33:39.969 "data_offset": 256, 01:33:39.969 "data_size": 7936 01:33:39.969 } 01:33:39.969 ] 01:33:39.969 }' 01:33:39.969 05:28:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:33:39.969 05:28:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 01:33:39.969 05:28:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:33:39.969 05:28:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 01:33:39.969 05:28:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 01:33:39.969 05:28:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:33:39.969 05:28:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:33:39.969 05:28:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # 
local raid_level=raid1 01:33:39.969 05:28:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:33:39.969 05:28:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 01:33:39.969 05:28:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:33:39.969 05:28:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:33:39.969 05:28:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:33:39.969 05:28:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 01:33:39.969 05:28:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:33:39.969 05:28:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:39.969 05:28:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:33:39.969 05:28:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 01:33:39.969 05:28:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:39.969 05:28:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:33:39.969 "name": "raid_bdev1", 01:33:39.969 "uuid": "26e3df44-37b0-47d8-b8f2-5082bf1d56d5", 01:33:39.969 "strip_size_kb": 0, 01:33:39.969 "state": "online", 01:33:39.969 "raid_level": "raid1", 01:33:39.969 "superblock": true, 01:33:39.969 "num_base_bdevs": 2, 01:33:39.969 "num_base_bdevs_discovered": 2, 01:33:39.969 "num_base_bdevs_operational": 2, 01:33:39.969 "base_bdevs_list": [ 01:33:39.969 { 01:33:39.969 "name": "spare", 01:33:39.969 "uuid": "07f14b81-3a11-5699-b8a6-23a79f2240fb", 01:33:39.969 
"is_configured": true, 01:33:39.969 "data_offset": 256, 01:33:39.969 "data_size": 7936 01:33:39.969 }, 01:33:39.969 { 01:33:39.969 "name": "BaseBdev2", 01:33:39.969 "uuid": "f82e9185-499b-5eba-9aa9-d8a2be4d14d4", 01:33:39.969 "is_configured": true, 01:33:39.969 "data_offset": 256, 01:33:39.969 "data_size": 7936 01:33:39.969 } 01:33:39.969 ] 01:33:39.969 }' 01:33:39.969 05:28:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:33:39.969 05:28:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 01:33:40.537 05:28:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 01:33:40.537 05:28:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:40.537 05:28:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 01:33:40.537 [2024-12-09 05:28:32.034450] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 01:33:40.537 [2024-12-09 05:28:32.034515] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 01:33:40.537 [2024-12-09 05:28:32.034659] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 01:33:40.537 [2024-12-09 05:28:32.034779] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 01:33:40.537 [2024-12-09 05:28:32.034796] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 01:33:40.537 05:28:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:40.537 05:28:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 01:33:40.537 05:28:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 
01:33:40.537 05:28:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # jq length 01:33:40.537 05:28:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 01:33:40.537 05:28:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:40.537 05:28:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 01:33:40.537 05:28:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 01:33:40.537 05:28:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 01:33:40.537 05:28:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 01:33:40.537 05:28:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:40.537 05:28:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 01:33:40.537 05:28:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:40.537 05:28:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 01:33:40.537 05:28:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:40.537 05:28:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 01:33:40.537 [2024-12-09 05:28:32.102422] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 01:33:40.537 [2024-12-09 05:28:32.102530] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:33:40.537 [2024-12-09 05:28:32.102568] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 01:33:40.537 [2024-12-09 05:28:32.102586] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:33:40.537 [2024-12-09 05:28:32.105529] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:33:40.537 [2024-12-09 05:28:32.105570] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 01:33:40.537 [2024-12-09 05:28:32.105648] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 01:33:40.537 [2024-12-09 05:28:32.105712] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 01:33:40.537 [2024-12-09 05:28:32.105888] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 01:33:40.537 spare 01:33:40.537 05:28:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:40.537 05:28:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 01:33:40.537 05:28:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:40.537 05:28:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 01:33:40.796 [2024-12-09 05:28:32.206013] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 01:33:40.796 [2024-12-09 05:28:32.206049] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 01:33:40.796 [2024-12-09 05:28:32.206204] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 01:33:40.796 [2024-12-09 05:28:32.206322] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 01:33:40.796 [2024-12-09 05:28:32.206340] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 01:33:40.796 [2024-12-09 05:28:32.206513] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:33:40.796 05:28:32 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:40.796 05:28:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 01:33:40.796 05:28:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:33:40.796 05:28:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:33:40.796 05:28:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:33:40.796 05:28:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:33:40.796 05:28:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 01:33:40.796 05:28:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:33:40.796 05:28:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:33:40.796 05:28:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:33:40.796 05:28:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 01:33:40.796 05:28:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:33:40.796 05:28:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:40.796 05:28:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 01:33:40.796 05:28:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:33:40.796 05:28:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:40.796 05:28:32 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:33:40.796 "name": "raid_bdev1", 01:33:40.796 "uuid": "26e3df44-37b0-47d8-b8f2-5082bf1d56d5", 01:33:40.796 "strip_size_kb": 0, 01:33:40.796 "state": "online", 01:33:40.796 "raid_level": "raid1", 01:33:40.796 "superblock": true, 01:33:40.796 "num_base_bdevs": 2, 01:33:40.796 "num_base_bdevs_discovered": 2, 01:33:40.796 "num_base_bdevs_operational": 2, 01:33:40.796 "base_bdevs_list": [ 01:33:40.796 { 01:33:40.796 "name": "spare", 01:33:40.796 "uuid": "07f14b81-3a11-5699-b8a6-23a79f2240fb", 01:33:40.796 "is_configured": true, 01:33:40.796 "data_offset": 256, 01:33:40.796 "data_size": 7936 01:33:40.796 }, 01:33:40.796 { 01:33:40.796 "name": "BaseBdev2", 01:33:40.796 "uuid": "f82e9185-499b-5eba-9aa9-d8a2be4d14d4", 01:33:40.796 "is_configured": true, 01:33:40.796 "data_offset": 256, 01:33:40.796 "data_size": 7936 01:33:40.796 } 01:33:40.796 ] 01:33:40.796 }' 01:33:40.796 05:28:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:33:40.796 05:28:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 01:33:41.363 05:28:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 01:33:41.363 05:28:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:33:41.363 05:28:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 01:33:41.363 05:28:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 01:33:41.363 05:28:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:33:41.363 05:28:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:33:41.363 05:28:32 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:41.363 05:28:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 01:33:41.363 05:28:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:33:41.363 05:28:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:41.363 05:28:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:33:41.363 "name": "raid_bdev1", 01:33:41.363 "uuid": "26e3df44-37b0-47d8-b8f2-5082bf1d56d5", 01:33:41.363 "strip_size_kb": 0, 01:33:41.363 "state": "online", 01:33:41.363 "raid_level": "raid1", 01:33:41.363 "superblock": true, 01:33:41.363 "num_base_bdevs": 2, 01:33:41.363 "num_base_bdevs_discovered": 2, 01:33:41.363 "num_base_bdevs_operational": 2, 01:33:41.363 "base_bdevs_list": [ 01:33:41.363 { 01:33:41.363 "name": "spare", 01:33:41.363 "uuid": "07f14b81-3a11-5699-b8a6-23a79f2240fb", 01:33:41.363 "is_configured": true, 01:33:41.363 "data_offset": 256, 01:33:41.363 "data_size": 7936 01:33:41.363 }, 01:33:41.363 { 01:33:41.363 "name": "BaseBdev2", 01:33:41.363 "uuid": "f82e9185-499b-5eba-9aa9-d8a2be4d14d4", 01:33:41.363 "is_configured": true, 01:33:41.363 "data_offset": 256, 01:33:41.363 "data_size": 7936 01:33:41.363 } 01:33:41.363 ] 01:33:41.363 }' 01:33:41.363 05:28:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:33:41.363 05:28:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 01:33:41.363 05:28:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:33:41.363 05:28:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 01:33:41.364 05:28:32 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 01:33:41.364 05:28:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 01:33:41.364 05:28:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:41.364 05:28:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 01:33:41.364 05:28:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:41.364 05:28:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 01:33:41.364 05:28:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 01:33:41.364 05:28:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:41.364 05:28:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 01:33:41.364 [2024-12-09 05:28:32.890796] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 01:33:41.364 05:28:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:41.364 05:28:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 01:33:41.364 05:28:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:33:41.364 05:28:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:33:41.364 05:28:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:33:41.364 05:28:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:33:41.364 05:28:32 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 01:33:41.364 05:28:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:33:41.364 05:28:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:33:41.364 05:28:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:33:41.364 05:28:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 01:33:41.364 05:28:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:33:41.364 05:28:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:41.364 05:28:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 01:33:41.364 05:28:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:33:41.364 05:28:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:41.364 05:28:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:33:41.364 "name": "raid_bdev1", 01:33:41.364 "uuid": "26e3df44-37b0-47d8-b8f2-5082bf1d56d5", 01:33:41.364 "strip_size_kb": 0, 01:33:41.364 "state": "online", 01:33:41.364 "raid_level": "raid1", 01:33:41.364 "superblock": true, 01:33:41.364 "num_base_bdevs": 2, 01:33:41.364 "num_base_bdevs_discovered": 1, 01:33:41.364 "num_base_bdevs_operational": 1, 01:33:41.364 "base_bdevs_list": [ 01:33:41.364 { 01:33:41.364 "name": null, 01:33:41.364 "uuid": "00000000-0000-0000-0000-000000000000", 01:33:41.364 "is_configured": false, 01:33:41.364 "data_offset": 0, 01:33:41.364 "data_size": 7936 01:33:41.364 }, 01:33:41.364 { 01:33:41.364 "name": "BaseBdev2", 01:33:41.364 
"uuid": "f82e9185-499b-5eba-9aa9-d8a2be4d14d4", 01:33:41.364 "is_configured": true, 01:33:41.364 "data_offset": 256, 01:33:41.364 "data_size": 7936 01:33:41.364 } 01:33:41.364 ] 01:33:41.364 }' 01:33:41.364 05:28:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:33:41.364 05:28:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 01:33:41.931 05:28:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 01:33:41.931 05:28:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:41.931 05:28:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 01:33:41.931 [2024-12-09 05:28:33.390914] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 01:33:41.931 [2024-12-09 05:28:33.391121] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 01:33:41.931 [2024-12-09 05:28:33.391147] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
01:33:41.931 [2024-12-09 05:28:33.391199] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 01:33:41.931 [2024-12-09 05:28:33.406569] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 01:33:41.931 05:28:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:41.931 05:28:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@757 -- # sleep 1 01:33:41.931 [2024-12-09 05:28:33.408907] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 01:33:42.865 05:28:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 01:33:42.865 05:28:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:33:42.865 05:28:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 01:33:42.865 05:28:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 01:33:42.865 05:28:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:33:42.865 05:28:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:33:42.865 05:28:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:33:42.865 05:28:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:42.865 05:28:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 01:33:42.865 05:28:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:42.865 05:28:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
01:33:42.865 "name": "raid_bdev1", 01:33:42.865 "uuid": "26e3df44-37b0-47d8-b8f2-5082bf1d56d5", 01:33:42.865 "strip_size_kb": 0, 01:33:42.865 "state": "online", 01:33:42.865 "raid_level": "raid1", 01:33:42.865 "superblock": true, 01:33:42.865 "num_base_bdevs": 2, 01:33:42.865 "num_base_bdevs_discovered": 2, 01:33:42.865 "num_base_bdevs_operational": 2, 01:33:42.865 "process": { 01:33:42.865 "type": "rebuild", 01:33:42.865 "target": "spare", 01:33:42.865 "progress": { 01:33:42.865 "blocks": 2560, 01:33:42.865 "percent": 32 01:33:42.865 } 01:33:42.865 }, 01:33:42.865 "base_bdevs_list": [ 01:33:42.865 { 01:33:42.865 "name": "spare", 01:33:42.865 "uuid": "07f14b81-3a11-5699-b8a6-23a79f2240fb", 01:33:42.865 "is_configured": true, 01:33:42.865 "data_offset": 256, 01:33:42.865 "data_size": 7936 01:33:42.865 }, 01:33:42.865 { 01:33:42.865 "name": "BaseBdev2", 01:33:42.865 "uuid": "f82e9185-499b-5eba-9aa9-d8a2be4d14d4", 01:33:42.865 "is_configured": true, 01:33:42.865 "data_offset": 256, 01:33:42.865 "data_size": 7936 01:33:42.865 } 01:33:42.865 ] 01:33:42.865 }' 01:33:42.865 05:28:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:33:43.122 05:28:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 01:33:43.122 05:28:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:33:43.122 05:28:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 01:33:43.122 05:28:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 01:33:43.122 05:28:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:43.122 05:28:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 01:33:43.122 [2024-12-09 05:28:34.570695] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 01:33:43.122 [2024-12-09 05:28:34.620375] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 01:33:43.122 [2024-12-09 05:28:34.620499] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:33:43.122 [2024-12-09 05:28:34.620538] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 01:33:43.122 [2024-12-09 05:28:34.620553] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 01:33:43.122 05:28:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:43.122 05:28:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 01:33:43.122 05:28:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:33:43.122 05:28:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:33:43.122 05:28:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:33:43.122 05:28:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:33:43.122 05:28:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 01:33:43.122 05:28:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:33:43.122 05:28:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:33:43.122 05:28:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:33:43.122 05:28:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 01:33:43.122 05:28:34 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:33:43.122 05:28:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:33:43.122 05:28:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:43.122 05:28:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 01:33:43.122 05:28:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:43.122 05:28:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:33:43.122 "name": "raid_bdev1", 01:33:43.122 "uuid": "26e3df44-37b0-47d8-b8f2-5082bf1d56d5", 01:33:43.122 "strip_size_kb": 0, 01:33:43.122 "state": "online", 01:33:43.122 "raid_level": "raid1", 01:33:43.122 "superblock": true, 01:33:43.122 "num_base_bdevs": 2, 01:33:43.122 "num_base_bdevs_discovered": 1, 01:33:43.122 "num_base_bdevs_operational": 1, 01:33:43.122 "base_bdevs_list": [ 01:33:43.122 { 01:33:43.122 "name": null, 01:33:43.122 "uuid": "00000000-0000-0000-0000-000000000000", 01:33:43.122 "is_configured": false, 01:33:43.122 "data_offset": 0, 01:33:43.122 "data_size": 7936 01:33:43.122 }, 01:33:43.122 { 01:33:43.122 "name": "BaseBdev2", 01:33:43.122 "uuid": "f82e9185-499b-5eba-9aa9-d8a2be4d14d4", 01:33:43.122 "is_configured": true, 01:33:43.122 "data_offset": 256, 01:33:43.122 "data_size": 7936 01:33:43.122 } 01:33:43.122 ] 01:33:43.122 }' 01:33:43.122 05:28:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:33:43.122 05:28:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 01:33:43.689 05:28:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 01:33:43.689 05:28:35 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:43.689 05:28:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 01:33:43.689 [2024-12-09 05:28:35.167764] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 01:33:43.689 [2024-12-09 05:28:35.167912] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:33:43.689 [2024-12-09 05:28:35.167956] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 01:33:43.689 [2024-12-09 05:28:35.167978] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:33:43.689 [2024-12-09 05:28:35.168279] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:33:43.689 [2024-12-09 05:28:35.168325] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 01:33:43.689 [2024-12-09 05:28:35.168438] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 01:33:43.689 [2024-12-09 05:28:35.168463] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 01:33:43.689 [2024-12-09 05:28:35.168478] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
01:33:43.689 [2024-12-09 05:28:35.168509] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 01:33:43.689 [2024-12-09 05:28:35.185210] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 01:33:43.689 spare 01:33:43.689 05:28:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:43.689 05:28:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@764 -- # sleep 1 01:33:43.689 [2024-12-09 05:28:35.187947] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 01:33:44.657 05:28:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 01:33:44.657 05:28:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:33:44.657 05:28:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 01:33:44.657 05:28:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 01:33:44.657 05:28:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:33:44.657 05:28:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:33:44.657 05:28:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:33:44.657 05:28:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:44.657 05:28:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 01:33:44.657 05:28:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:44.657 05:28:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 01:33:44.657 "name": "raid_bdev1", 01:33:44.657 "uuid": "26e3df44-37b0-47d8-b8f2-5082bf1d56d5", 01:33:44.657 "strip_size_kb": 0, 01:33:44.657 "state": "online", 01:33:44.657 "raid_level": "raid1", 01:33:44.657 "superblock": true, 01:33:44.657 "num_base_bdevs": 2, 01:33:44.657 "num_base_bdevs_discovered": 2, 01:33:44.657 "num_base_bdevs_operational": 2, 01:33:44.657 "process": { 01:33:44.657 "type": "rebuild", 01:33:44.657 "target": "spare", 01:33:44.657 "progress": { 01:33:44.657 "blocks": 2560, 01:33:44.657 "percent": 32 01:33:44.657 } 01:33:44.657 }, 01:33:44.657 "base_bdevs_list": [ 01:33:44.657 { 01:33:44.657 "name": "spare", 01:33:44.657 "uuid": "07f14b81-3a11-5699-b8a6-23a79f2240fb", 01:33:44.657 "is_configured": true, 01:33:44.657 "data_offset": 256, 01:33:44.657 "data_size": 7936 01:33:44.657 }, 01:33:44.657 { 01:33:44.657 "name": "BaseBdev2", 01:33:44.657 "uuid": "f82e9185-499b-5eba-9aa9-d8a2be4d14d4", 01:33:44.657 "is_configured": true, 01:33:44.658 "data_offset": 256, 01:33:44.658 "data_size": 7936 01:33:44.658 } 01:33:44.658 ] 01:33:44.658 }' 01:33:44.658 05:28:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:33:44.935 05:28:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 01:33:44.935 05:28:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:33:44.935 05:28:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 01:33:44.935 05:28:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 01:33:44.935 05:28:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:44.935 05:28:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 01:33:44.935 [2024-12-09 
05:28:36.353931] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 01:33:44.935 [2024-12-09 05:28:36.399661] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 01:33:44.935 [2024-12-09 05:28:36.399755] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 01:33:44.935 [2024-12-09 05:28:36.399783] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 01:33:44.935 [2024-12-09 05:28:36.399795] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 01:33:44.935 05:28:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:44.935 05:28:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 01:33:44.935 05:28:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:33:44.935 05:28:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:33:44.935 05:28:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:33:44.935 05:28:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:33:44.935 05:28:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 01:33:44.935 05:28:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:33:44.935 05:28:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:33:44.935 05:28:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:33:44.935 05:28:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 01:33:44.935 05:28:36 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:33:44.935 05:28:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:33:44.935 05:28:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:44.935 05:28:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 01:33:44.935 05:28:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:44.935 05:28:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:33:44.935 "name": "raid_bdev1", 01:33:44.935 "uuid": "26e3df44-37b0-47d8-b8f2-5082bf1d56d5", 01:33:44.935 "strip_size_kb": 0, 01:33:44.935 "state": "online", 01:33:44.935 "raid_level": "raid1", 01:33:44.935 "superblock": true, 01:33:44.935 "num_base_bdevs": 2, 01:33:44.935 "num_base_bdevs_discovered": 1, 01:33:44.935 "num_base_bdevs_operational": 1, 01:33:44.935 "base_bdevs_list": [ 01:33:44.935 { 01:33:44.935 "name": null, 01:33:44.935 "uuid": "00000000-0000-0000-0000-000000000000", 01:33:44.935 "is_configured": false, 01:33:44.935 "data_offset": 0, 01:33:44.935 "data_size": 7936 01:33:44.935 }, 01:33:44.935 { 01:33:44.935 "name": "BaseBdev2", 01:33:44.935 "uuid": "f82e9185-499b-5eba-9aa9-d8a2be4d14d4", 01:33:44.935 "is_configured": true, 01:33:44.935 "data_offset": 256, 01:33:44.935 "data_size": 7936 01:33:44.935 } 01:33:44.935 ] 01:33:44.935 }' 01:33:44.935 05:28:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:33:44.935 05:28:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 01:33:45.500 05:28:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 01:33:45.500 05:28:36 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:33:45.500 05:28:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 01:33:45.500 05:28:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 01:33:45.500 05:28:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:33:45.500 05:28:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:33:45.500 05:28:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:45.500 05:28:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 01:33:45.500 05:28:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:33:45.500 05:28:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:45.500 05:28:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:33:45.500 "name": "raid_bdev1", 01:33:45.500 "uuid": "26e3df44-37b0-47d8-b8f2-5082bf1d56d5", 01:33:45.500 "strip_size_kb": 0, 01:33:45.500 "state": "online", 01:33:45.500 "raid_level": "raid1", 01:33:45.500 "superblock": true, 01:33:45.500 "num_base_bdevs": 2, 01:33:45.500 "num_base_bdevs_discovered": 1, 01:33:45.500 "num_base_bdevs_operational": 1, 01:33:45.500 "base_bdevs_list": [ 01:33:45.500 { 01:33:45.500 "name": null, 01:33:45.500 "uuid": "00000000-0000-0000-0000-000000000000", 01:33:45.500 "is_configured": false, 01:33:45.500 "data_offset": 0, 01:33:45.500 "data_size": 7936 01:33:45.500 }, 01:33:45.500 { 01:33:45.500 "name": "BaseBdev2", 01:33:45.500 "uuid": "f82e9185-499b-5eba-9aa9-d8a2be4d14d4", 01:33:45.500 "is_configured": true, 01:33:45.500 "data_offset": 256, 
01:33:45.501 "data_size": 7936 01:33:45.501 } 01:33:45.501 ] 01:33:45.501 }' 01:33:45.501 05:28:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:33:45.501 05:28:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 01:33:45.501 05:28:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:33:45.501 05:28:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 01:33:45.501 05:28:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 01:33:45.501 05:28:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:45.501 05:28:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 01:33:45.501 05:28:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:45.501 05:28:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 01:33:45.501 05:28:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:45.501 05:28:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 01:33:45.501 [2024-12-09 05:28:37.107877] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 01:33:45.501 [2024-12-09 05:28:37.107969] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:33:45.501 [2024-12-09 05:28:37.108004] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 01:33:45.501 [2024-12-09 05:28:37.108020] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:33:45.501 [2024-12-09 05:28:37.108293] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:33:45.501 [2024-12-09 05:28:37.108324] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 01:33:45.501 [2024-12-09 05:28:37.108411] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 01:33:45.501 [2024-12-09 05:28:37.108433] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 01:33:45.501 [2024-12-09 05:28:37.108463] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 01:33:45.501 [2024-12-09 05:28:37.108494] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 01:33:45.501 BaseBdev1 01:33:45.501 05:28:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:45.501 05:28:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # sleep 1 01:33:46.876 05:28:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 01:33:46.876 05:28:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:33:46.876 05:28:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:33:46.876 05:28:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:33:46.876 05:28:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:33:46.876 05:28:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 01:33:46.876 05:28:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:33:46.876 05:28:38 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:33:46.876 05:28:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:33:46.876 05:28:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 01:33:46.876 05:28:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:33:46.876 05:28:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:46.876 05:28:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 01:33:46.876 05:28:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:33:46.876 05:28:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:46.876 05:28:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:33:46.876 "name": "raid_bdev1", 01:33:46.876 "uuid": "26e3df44-37b0-47d8-b8f2-5082bf1d56d5", 01:33:46.876 "strip_size_kb": 0, 01:33:46.876 "state": "online", 01:33:46.876 "raid_level": "raid1", 01:33:46.876 "superblock": true, 01:33:46.876 "num_base_bdevs": 2, 01:33:46.876 "num_base_bdevs_discovered": 1, 01:33:46.876 "num_base_bdevs_operational": 1, 01:33:46.876 "base_bdevs_list": [ 01:33:46.876 { 01:33:46.876 "name": null, 01:33:46.876 "uuid": "00000000-0000-0000-0000-000000000000", 01:33:46.876 "is_configured": false, 01:33:46.876 "data_offset": 0, 01:33:46.876 "data_size": 7936 01:33:46.876 }, 01:33:46.876 { 01:33:46.876 "name": "BaseBdev2", 01:33:46.876 "uuid": "f82e9185-499b-5eba-9aa9-d8a2be4d14d4", 01:33:46.876 "is_configured": true, 01:33:46.876 "data_offset": 256, 01:33:46.876 "data_size": 7936 01:33:46.876 } 01:33:46.876 ] 01:33:46.876 }' 01:33:46.876 05:28:38 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:33:46.876 05:28:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 01:33:47.135 05:28:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 01:33:47.135 05:28:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:33:47.135 05:28:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 01:33:47.135 05:28:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 01:33:47.135 05:28:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:33:47.135 05:28:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:33:47.135 05:28:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:33:47.135 05:28:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:47.135 05:28:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 01:33:47.135 05:28:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:47.135 05:28:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:33:47.135 "name": "raid_bdev1", 01:33:47.135 "uuid": "26e3df44-37b0-47d8-b8f2-5082bf1d56d5", 01:33:47.135 "strip_size_kb": 0, 01:33:47.135 "state": "online", 01:33:47.135 "raid_level": "raid1", 01:33:47.135 "superblock": true, 01:33:47.135 "num_base_bdevs": 2, 01:33:47.135 "num_base_bdevs_discovered": 1, 01:33:47.135 "num_base_bdevs_operational": 1, 01:33:47.135 "base_bdevs_list": [ 01:33:47.135 { 01:33:47.135 "name": 
null, 01:33:47.135 "uuid": "00000000-0000-0000-0000-000000000000", 01:33:47.135 "is_configured": false, 01:33:47.135 "data_offset": 0, 01:33:47.135 "data_size": 7936 01:33:47.135 }, 01:33:47.135 { 01:33:47.135 "name": "BaseBdev2", 01:33:47.135 "uuid": "f82e9185-499b-5eba-9aa9-d8a2be4d14d4", 01:33:47.135 "is_configured": true, 01:33:47.135 "data_offset": 256, 01:33:47.135 "data_size": 7936 01:33:47.135 } 01:33:47.135 ] 01:33:47.135 }' 01:33:47.135 05:28:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:33:47.135 05:28:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 01:33:47.135 05:28:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:33:47.394 05:28:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 01:33:47.394 05:28:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 01:33:47.394 05:28:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 01:33:47.394 05:28:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 01:33:47.394 05:28:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 01:33:47.394 05:28:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:33:47.394 05:28:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 01:33:47.394 05:28:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:33:47.394 05:28:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 01:33:47.394 05:28:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:47.394 05:28:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 01:33:47.394 [2024-12-09 05:28:38.784532] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 01:33:47.394 [2024-12-09 05:28:38.784859] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 01:33:47.394 [2024-12-09 05:28:38.784890] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 01:33:47.394 request: 01:33:47.394 { 01:33:47.394 "base_bdev": "BaseBdev1", 01:33:47.394 "raid_bdev": "raid_bdev1", 01:33:47.394 "method": "bdev_raid_add_base_bdev", 01:33:47.394 "req_id": 1 01:33:47.394 } 01:33:47.394 Got JSON-RPC error response 01:33:47.394 response: 01:33:47.394 { 01:33:47.394 "code": -22, 01:33:47.394 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 01:33:47.394 } 01:33:47.394 05:28:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 01:33:47.394 05:28:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@655 -- # es=1 01:33:47.394 05:28:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:33:47.394 05:28:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:33:47.394 05:28:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:33:47.394 05:28:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # sleep 1 01:33:48.331 05:28:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 01:33:48.331 05:28:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 01:33:48.331 05:28:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 01:33:48.331 05:28:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 01:33:48.331 05:28:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 01:33:48.331 05:28:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 01:33:48.331 05:28:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 01:33:48.331 05:28:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 01:33:48.331 05:28:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 01:33:48.331 05:28:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 01:33:48.331 05:28:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 01:33:48.331 05:28:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:33:48.331 05:28:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:48.331 05:28:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 01:33:48.331 05:28:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:48.331 05:28:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 01:33:48.331 "name": "raid_bdev1", 01:33:48.331 "uuid": "26e3df44-37b0-47d8-b8f2-5082bf1d56d5", 01:33:48.331 "strip_size_kb": 0, 
01:33:48.331 "state": "online", 01:33:48.331 "raid_level": "raid1", 01:33:48.331 "superblock": true, 01:33:48.331 "num_base_bdevs": 2, 01:33:48.331 "num_base_bdevs_discovered": 1, 01:33:48.331 "num_base_bdevs_operational": 1, 01:33:48.331 "base_bdevs_list": [ 01:33:48.331 { 01:33:48.331 "name": null, 01:33:48.331 "uuid": "00000000-0000-0000-0000-000000000000", 01:33:48.331 "is_configured": false, 01:33:48.331 "data_offset": 0, 01:33:48.331 "data_size": 7936 01:33:48.331 }, 01:33:48.331 { 01:33:48.331 "name": "BaseBdev2", 01:33:48.331 "uuid": "f82e9185-499b-5eba-9aa9-d8a2be4d14d4", 01:33:48.331 "is_configured": true, 01:33:48.331 "data_offset": 256, 01:33:48.331 "data_size": 7936 01:33:48.331 } 01:33:48.331 ] 01:33:48.331 }' 01:33:48.331 05:28:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 01:33:48.331 05:28:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 01:33:48.899 05:28:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 01:33:48.899 05:28:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 01:33:48.899 05:28:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 01:33:48.899 05:28:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 01:33:48.899 05:28:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 01:33:48.899 05:28:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 01:33:48.899 05:28:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 01:33:48.899 05:28:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:48.899 
05:28:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 01:33:48.899 05:28:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:48.899 05:28:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 01:33:48.899 "name": "raid_bdev1", 01:33:48.899 "uuid": "26e3df44-37b0-47d8-b8f2-5082bf1d56d5", 01:33:48.899 "strip_size_kb": 0, 01:33:48.899 "state": "online", 01:33:48.899 "raid_level": "raid1", 01:33:48.899 "superblock": true, 01:33:48.899 "num_base_bdevs": 2, 01:33:48.899 "num_base_bdevs_discovered": 1, 01:33:48.899 "num_base_bdevs_operational": 1, 01:33:48.899 "base_bdevs_list": [ 01:33:48.899 { 01:33:48.899 "name": null, 01:33:48.899 "uuid": "00000000-0000-0000-0000-000000000000", 01:33:48.899 "is_configured": false, 01:33:48.899 "data_offset": 0, 01:33:48.899 "data_size": 7936 01:33:48.899 }, 01:33:48.899 { 01:33:48.899 "name": "BaseBdev2", 01:33:48.899 "uuid": "f82e9185-499b-5eba-9aa9-d8a2be4d14d4", 01:33:48.899 "is_configured": true, 01:33:48.899 "data_offset": 256, 01:33:48.899 "data_size": 7936 01:33:48.899 } 01:33:48.899 ] 01:33:48.899 }' 01:33:48.899 05:28:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 01:33:48.899 05:28:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 01:33:48.899 05:28:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 01:33:48.899 05:28:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 01:33:48.899 05:28:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # killprocess 89450 01:33:48.899 05:28:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 89450 ']' 01:33:48.899 05:28:40 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 89450 01:33:48.899 05:28:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 01:33:48.899 05:28:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:33:48.899 05:28:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89450 01:33:48.899 killing process with pid 89450 01:33:48.899 Received shutdown signal, test time was about 60.000000 seconds 01:33:48.899 01:33:48.900 Latency(us) 01:33:48.900 [2024-12-09T05:28:40.517Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:33:48.900 [2024-12-09T05:28:40.517Z] =================================================================================================================== 01:33:48.900 [2024-12-09T05:28:40.517Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 01:33:48.900 05:28:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:33:48.900 05:28:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:33:48.900 05:28:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89450' 01:33:48.900 05:28:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 89450 01:33:48.900 [2024-12-09 05:28:40.508992] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 01:33:48.900 05:28:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 89450 01:33:48.900 [2024-12-09 05:28:40.509174] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 01:33:48.900 [2024-12-09 05:28:40.509247] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to 
free all in destruct 01:33:48.900 [2024-12-09 05:28:40.509266] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 01:33:49.159 [2024-12-09 05:28:40.771093] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 01:33:50.536 05:28:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@786 -- # return 0 01:33:50.536 01:33:50.536 real 0m18.467s 01:33:50.536 user 0m24.923s 01:33:50.536 sys 0m1.508s 01:33:50.536 05:28:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 01:33:50.536 ************************************ 01:33:50.536 END TEST raid_rebuild_test_sb_md_interleaved 01:33:50.536 ************************************ 01:33:50.536 05:28:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 01:33:50.536 05:28:41 bdev_raid -- bdev/bdev_raid.sh@1015 -- # trap - EXIT 01:33:50.536 05:28:41 bdev_raid -- bdev/bdev_raid.sh@1016 -- # cleanup 01:33:50.536 05:28:41 bdev_raid -- bdev/bdev_raid.sh@56 -- # '[' -n 89450 ']' 01:33:50.536 05:28:41 bdev_raid -- bdev/bdev_raid.sh@56 -- # ps -p 89450 01:33:50.536 05:28:41 bdev_raid -- bdev/bdev_raid.sh@60 -- # rm -rf /raidtest 01:33:50.536 01:33:50.536 real 13m13.188s 01:33:50.536 user 18m33.308s 01:33:50.536 sys 1m51.101s 01:33:50.536 05:28:42 bdev_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 01:33:50.536 05:28:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 01:33:50.536 ************************************ 01:33:50.536 END TEST bdev_raid 01:33:50.536 ************************************ 01:33:50.536 05:28:42 -- spdk/autotest.sh@190 -- # run_test spdkcli_raid /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 01:33:50.536 05:28:42 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:33:50.536 05:28:42 -- common/autotest_common.sh@1111 -- # xtrace_disable 01:33:50.536 05:28:42 -- common/autotest_common.sh@10 -- # set +x 01:33:50.536 
************************************ 01:33:50.536 START TEST spdkcli_raid 01:33:50.536 ************************************ 01:33:50.536 05:28:42 spdkcli_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 01:33:50.536 * Looking for test storage... 01:33:50.536 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 01:33:50.536 05:28:42 spdkcli_raid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:33:50.536 05:28:42 spdkcli_raid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:33:50.536 05:28:42 spdkcli_raid -- common/autotest_common.sh@1693 -- # lcov --version 01:33:50.796 05:28:42 spdkcli_raid -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:33:50.796 05:28:42 spdkcli_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:33:50.796 05:28:42 spdkcli_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 01:33:50.796 05:28:42 spdkcli_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 01:33:50.796 05:28:42 spdkcli_raid -- scripts/common.sh@336 -- # IFS=.-: 01:33:50.796 05:28:42 spdkcli_raid -- scripts/common.sh@336 -- # read -ra ver1 01:33:50.796 05:28:42 spdkcli_raid -- scripts/common.sh@337 -- # IFS=.-: 01:33:50.796 05:28:42 spdkcli_raid -- scripts/common.sh@337 -- # read -ra ver2 01:33:50.796 05:28:42 spdkcli_raid -- scripts/common.sh@338 -- # local 'op=<' 01:33:50.796 05:28:42 spdkcli_raid -- scripts/common.sh@340 -- # ver1_l=2 01:33:50.796 05:28:42 spdkcli_raid -- scripts/common.sh@341 -- # ver2_l=1 01:33:50.796 05:28:42 spdkcli_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:33:50.796 05:28:42 spdkcli_raid -- scripts/common.sh@344 -- # case "$op" in 01:33:50.796 05:28:42 spdkcli_raid -- scripts/common.sh@345 -- # : 1 01:33:50.796 05:28:42 spdkcli_raid -- scripts/common.sh@364 -- # (( v = 0 )) 01:33:50.796 05:28:42 spdkcli_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:33:50.796 05:28:42 spdkcli_raid -- scripts/common.sh@365 -- # decimal 1 01:33:50.796 05:28:42 spdkcli_raid -- scripts/common.sh@353 -- # local d=1 01:33:50.796 05:28:42 spdkcli_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:33:50.796 05:28:42 spdkcli_raid -- scripts/common.sh@355 -- # echo 1 01:33:50.796 05:28:42 spdkcli_raid -- scripts/common.sh@365 -- # ver1[v]=1 01:33:50.796 05:28:42 spdkcli_raid -- scripts/common.sh@366 -- # decimal 2 01:33:50.796 05:28:42 spdkcli_raid -- scripts/common.sh@353 -- # local d=2 01:33:50.796 05:28:42 spdkcli_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:33:50.796 05:28:42 spdkcli_raid -- scripts/common.sh@355 -- # echo 2 01:33:50.796 05:28:42 spdkcli_raid -- scripts/common.sh@366 -- # ver2[v]=2 01:33:50.796 05:28:42 spdkcli_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:33:50.796 05:28:42 spdkcli_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:33:50.796 05:28:42 spdkcli_raid -- scripts/common.sh@368 -- # return 0 01:33:50.796 05:28:42 spdkcli_raid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:33:50.796 05:28:42 spdkcli_raid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:33:50.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:33:50.796 --rc genhtml_branch_coverage=1 01:33:50.796 --rc genhtml_function_coverage=1 01:33:50.796 --rc genhtml_legend=1 01:33:50.796 --rc geninfo_all_blocks=1 01:33:50.796 --rc geninfo_unexecuted_blocks=1 01:33:50.796 01:33:50.796 ' 01:33:50.796 05:28:42 spdkcli_raid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:33:50.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:33:50.796 --rc genhtml_branch_coverage=1 01:33:50.796 --rc genhtml_function_coverage=1 01:33:50.796 --rc genhtml_legend=1 01:33:50.796 --rc geninfo_all_blocks=1 01:33:50.796 --rc geninfo_unexecuted_blocks=1 01:33:50.796 01:33:50.796 ' 01:33:50.796 
05:28:42 spdkcli_raid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:33:50.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:33:50.796 --rc genhtml_branch_coverage=1 01:33:50.796 --rc genhtml_function_coverage=1 01:33:50.796 --rc genhtml_legend=1 01:33:50.796 --rc geninfo_all_blocks=1 01:33:50.796 --rc geninfo_unexecuted_blocks=1 01:33:50.796 01:33:50.796 ' 01:33:50.796 05:28:42 spdkcli_raid -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:33:50.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:33:50.796 --rc genhtml_branch_coverage=1 01:33:50.796 --rc genhtml_function_coverage=1 01:33:50.796 --rc genhtml_legend=1 01:33:50.796 --rc geninfo_all_blocks=1 01:33:50.796 --rc geninfo_unexecuted_blocks=1 01:33:50.796 01:33:50.796 ' 01:33:50.796 05:28:42 spdkcli_raid -- spdkcli/raid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 01:33:50.796 05:28:42 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 01:33:50.796 05:28:42 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 01:33:50.796 05:28:42 spdkcli_raid -- spdkcli/raid.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 01:33:50.796 05:28:42 spdkcli_raid -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 01:33:50.796 05:28:42 spdkcli_raid -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 01:33:50.796 05:28:42 spdkcli_raid -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 01:33:50.796 05:28:42 spdkcli_raid -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 01:33:50.796 05:28:42 spdkcli_raid -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 01:33:50.796 05:28:42 spdkcli_raid -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 01:33:50.796 05:28:42 spdkcli_raid -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 
01:33:50.796 05:28:42 spdkcli_raid -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 01:33:50.796 05:28:42 spdkcli_raid -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 01:33:50.796 05:28:42 spdkcli_raid -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 01:33:50.796 05:28:42 spdkcli_raid -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 01:33:50.796 05:28:42 spdkcli_raid -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 01:33:50.796 05:28:42 spdkcli_raid -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 01:33:50.796 05:28:42 spdkcli_raid -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 01:33:50.796 05:28:42 spdkcli_raid -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 01:33:50.796 05:28:42 spdkcli_raid -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 01:33:50.796 05:28:42 spdkcli_raid -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 01:33:50.796 05:28:42 spdkcli_raid -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 01:33:50.796 05:28:42 spdkcli_raid -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 01:33:50.796 05:28:42 spdkcli_raid -- spdkcli/raid.sh@12 -- # MATCH_FILE=spdkcli_raid.test 01:33:50.796 05:28:42 spdkcli_raid -- spdkcli/raid.sh@13 -- # SPDKCLI_BRANCH=/bdevs 01:33:50.796 05:28:42 spdkcli_raid -- spdkcli/raid.sh@14 -- # dirname /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 01:33:50.796 05:28:42 spdkcli_raid -- spdkcli/raid.sh@14 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/spdkcli 01:33:50.796 05:28:42 spdkcli_raid -- spdkcli/raid.sh@14 -- # testdir=/home/vagrant/spdk_repo/spdk/test/spdkcli 01:33:50.796 05:28:42 spdkcli_raid -- spdkcli/raid.sh@15 -- # . 
/home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 01:33:50.796 05:28:42 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 01:33:50.796 05:28:42 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 01:33:50.796 05:28:42 spdkcli_raid -- spdkcli/raid.sh@17 -- # trap cleanup EXIT 01:33:50.796 05:28:42 spdkcli_raid -- spdkcli/raid.sh@19 -- # timing_enter run_spdk_tgt 01:33:50.797 05:28:42 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 01:33:50.797 05:28:42 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 01:33:50.797 05:28:42 spdkcli_raid -- spdkcli/raid.sh@20 -- # run_spdk_tgt 01:33:50.797 05:28:42 spdkcli_raid -- spdkcli/common.sh@27 -- # spdk_tgt_pid=90127 01:33:50.797 05:28:42 spdkcli_raid -- spdkcli/common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 01:33:50.797 05:28:42 spdkcli_raid -- spdkcli/common.sh@28 -- # waitforlisten 90127 01:33:50.797 05:28:42 spdkcli_raid -- common/autotest_common.sh@835 -- # '[' -z 90127 ']' 01:33:50.797 05:28:42 spdkcli_raid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:33:50.797 05:28:42 spdkcli_raid -- common/autotest_common.sh@840 -- # local max_retries=100 01:33:50.797 05:28:42 spdkcli_raid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:33:50.797 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:33:50.797 05:28:42 spdkcli_raid -- common/autotest_common.sh@844 -- # xtrace_disable 01:33:50.797 05:28:42 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 01:33:50.797 [2024-12-09 05:28:42.389326] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
01:33:50.797 [2024-12-09 05:28:42.389856] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90127 ] 01:33:51.077 [2024-12-09 05:28:42.563090] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 01:33:51.336 [2024-12-09 05:28:42.706472] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:33:51.336 [2024-12-09 05:28:42.706484] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:33:52.273 05:28:43 spdkcli_raid -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:33:52.273 05:28:43 spdkcli_raid -- common/autotest_common.sh@868 -- # return 0 01:33:52.273 05:28:43 spdkcli_raid -- spdkcli/raid.sh@21 -- # timing_exit run_spdk_tgt 01:33:52.273 05:28:43 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 01:33:52.273 05:28:43 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 01:33:52.273 05:28:43 spdkcli_raid -- spdkcli/raid.sh@23 -- # timing_enter spdkcli_create_malloc 01:33:52.273 05:28:43 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 01:33:52.273 05:28:43 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 01:33:52.273 05:28:43 spdkcli_raid -- spdkcli/raid.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 8 512 Malloc1'\'' '\''Malloc1'\'' True 01:33:52.273 '\''/bdevs/malloc create 8 512 Malloc2'\'' '\''Malloc2'\'' True 01:33:52.273 ' 01:33:54.185 Executing command: ['/bdevs/malloc create 8 512 Malloc1', 'Malloc1', True] 01:33:54.185 Executing command: ['/bdevs/malloc create 8 512 Malloc2', 'Malloc2', True] 01:33:54.185 05:28:45 spdkcli_raid -- spdkcli/raid.sh@27 -- # timing_exit spdkcli_create_malloc 01:33:54.185 05:28:45 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 01:33:54.185 05:28:45 spdkcli_raid -- 
common/autotest_common.sh@10 -- # set +x 01:33:54.185 05:28:45 spdkcli_raid -- spdkcli/raid.sh@29 -- # timing_enter spdkcli_create_raid 01:33:54.185 05:28:45 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 01:33:54.185 05:28:45 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 01:33:54.185 05:28:45 spdkcli_raid -- spdkcli/raid.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4'\'' '\''testraid'\'' True 01:33:54.186 ' 01:33:55.119 Executing command: ['/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4', 'testraid', True] 01:33:55.119 05:28:46 spdkcli_raid -- spdkcli/raid.sh@32 -- # timing_exit spdkcli_create_raid 01:33:55.119 05:28:46 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 01:33:55.119 05:28:46 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 01:33:55.119 05:28:46 spdkcli_raid -- spdkcli/raid.sh@34 -- # timing_enter spdkcli_check_match 01:33:55.119 05:28:46 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 01:33:55.119 05:28:46 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 01:33:55.119 05:28:46 spdkcli_raid -- spdkcli/raid.sh@35 -- # check_match 01:33:55.119 05:28:46 spdkcli_raid -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /bdevs 01:33:55.686 05:28:47 spdkcli_raid -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test.match 01:33:55.686 05:28:47 spdkcli_raid -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test 01:33:55.686 05:28:47 spdkcli_raid -- spdkcli/raid.sh@36 -- # timing_exit spdkcli_check_match 01:33:55.686 05:28:47 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 01:33:55.686 05:28:47 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 01:33:55.944 05:28:47 spdkcli_raid -- 
spdkcli/raid.sh@38 -- # timing_enter spdkcli_delete_raid 01:33:55.944 05:28:47 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 01:33:55.944 05:28:47 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 01:33:55.944 05:28:47 spdkcli_raid -- spdkcli/raid.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume delete testraid'\'' '\'''\'' True 01:33:55.944 ' 01:33:56.878 Executing command: ['/bdevs/raid_volume delete testraid', '', True] 01:33:57.135 05:28:48 spdkcli_raid -- spdkcli/raid.sh@41 -- # timing_exit spdkcli_delete_raid 01:33:57.135 05:28:48 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 01:33:57.135 05:28:48 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 01:33:57.135 05:28:48 spdkcli_raid -- spdkcli/raid.sh@43 -- # timing_enter spdkcli_delete_malloc 01:33:57.135 05:28:48 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 01:33:57.135 05:28:48 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 01:33:57.135 05:28:48 spdkcli_raid -- spdkcli/raid.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc delete Malloc1'\'' '\'''\'' True 01:33:57.135 '\''/bdevs/malloc delete Malloc2'\'' '\'''\'' True 01:33:57.135 ' 01:33:58.508 Executing command: ['/bdevs/malloc delete Malloc1', '', True] 01:33:58.508 Executing command: ['/bdevs/malloc delete Malloc2', '', True] 01:33:58.766 05:28:50 spdkcli_raid -- spdkcli/raid.sh@47 -- # timing_exit spdkcli_delete_malloc 01:33:58.766 05:28:50 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 01:33:58.766 05:28:50 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 01:33:58.766 05:28:50 spdkcli_raid -- spdkcli/raid.sh@49 -- # killprocess 90127 01:33:58.766 05:28:50 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 90127 ']' 01:33:58.766 05:28:50 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 90127 01:33:58.766 05:28:50 spdkcli_raid -- 
common/autotest_common.sh@959 -- # uname 01:33:58.766 05:28:50 spdkcli_raid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:33:58.766 05:28:50 spdkcli_raid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90127 01:33:58.766 05:28:50 spdkcli_raid -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:33:58.766 killing process with pid 90127 01:33:58.766 05:28:50 spdkcli_raid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:33:58.766 05:28:50 spdkcli_raid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90127' 01:33:58.766 05:28:50 spdkcli_raid -- common/autotest_common.sh@973 -- # kill 90127 01:33:58.766 05:28:50 spdkcli_raid -- common/autotest_common.sh@978 -- # wait 90127 01:34:01.298 05:28:52 spdkcli_raid -- spdkcli/raid.sh@1 -- # cleanup 01:34:01.298 05:28:52 spdkcli_raid -- spdkcli/common.sh@10 -- # '[' -n 90127 ']' 01:34:01.298 05:28:52 spdkcli_raid -- spdkcli/common.sh@11 -- # killprocess 90127 01:34:01.298 05:28:52 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 90127 ']' 01:34:01.298 05:28:52 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 90127 01:34:01.298 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (90127) - No such process 01:34:01.298 Process with pid 90127 is not found 01:34:01.298 05:28:52 spdkcli_raid -- common/autotest_common.sh@981 -- # echo 'Process with pid 90127 is not found' 01:34:01.298 05:28:52 spdkcli_raid -- spdkcli/common.sh@13 -- # '[' -n '' ']' 01:34:01.298 05:28:52 spdkcli_raid -- spdkcli/common.sh@16 -- # '[' -n '' ']' 01:34:01.298 05:28:52 spdkcli_raid -- spdkcli/common.sh@19 -- # '[' -n '' ']' 01:34:01.298 05:28:52 spdkcli_raid -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_raid.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 01:34:01.298 01:34:01.298 real 0m10.648s 01:34:01.298 user 0m21.884s 01:34:01.298 sys 
0m1.311s 01:34:01.298 ************************************ 01:34:01.298 END TEST spdkcli_raid 01:34:01.298 ************************************ 01:34:01.298 05:28:52 spdkcli_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 01:34:01.298 05:28:52 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 01:34:01.298 05:28:52 -- spdk/autotest.sh@191 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 01:34:01.298 05:28:52 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:34:01.298 05:28:52 -- common/autotest_common.sh@1111 -- # xtrace_disable 01:34:01.298 05:28:52 -- common/autotest_common.sh@10 -- # set +x 01:34:01.298 ************************************ 01:34:01.298 START TEST blockdev_raid5f 01:34:01.298 ************************************ 01:34:01.298 05:28:52 blockdev_raid5f -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 01:34:01.298 * Looking for test storage... 01:34:01.298 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 01:34:01.298 05:28:52 blockdev_raid5f -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:34:01.298 05:28:52 blockdev_raid5f -- common/autotest_common.sh@1693 -- # lcov --version 01:34:01.299 05:28:52 blockdev_raid5f -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:34:01.557 05:28:52 blockdev_raid5f -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:34:01.557 05:28:52 blockdev_raid5f -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:34:01.557 05:28:52 blockdev_raid5f -- scripts/common.sh@333 -- # local ver1 ver1_l 01:34:01.557 05:28:52 blockdev_raid5f -- scripts/common.sh@334 -- # local ver2 ver2_l 01:34:01.557 05:28:52 blockdev_raid5f -- scripts/common.sh@336 -- # IFS=.-: 01:34:01.557 05:28:52 blockdev_raid5f -- scripts/common.sh@336 -- # read -ra ver1 01:34:01.557 05:28:52 blockdev_raid5f -- scripts/common.sh@337 -- # IFS=.-: 01:34:01.557 05:28:52 blockdev_raid5f -- scripts/common.sh@337 -- # read -ra 
ver2 01:34:01.557 05:28:52 blockdev_raid5f -- scripts/common.sh@338 -- # local 'op=<' 01:34:01.557 05:28:52 blockdev_raid5f -- scripts/common.sh@340 -- # ver1_l=2 01:34:01.557 05:28:52 blockdev_raid5f -- scripts/common.sh@341 -- # ver2_l=1 01:34:01.557 05:28:52 blockdev_raid5f -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:34:01.557 05:28:52 blockdev_raid5f -- scripts/common.sh@344 -- # case "$op" in 01:34:01.557 05:28:52 blockdev_raid5f -- scripts/common.sh@345 -- # : 1 01:34:01.557 05:28:52 blockdev_raid5f -- scripts/common.sh@364 -- # (( v = 0 )) 01:34:01.557 05:28:52 blockdev_raid5f -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 01:34:01.557 05:28:52 blockdev_raid5f -- scripts/common.sh@365 -- # decimal 1 01:34:01.557 05:28:52 blockdev_raid5f -- scripts/common.sh@353 -- # local d=1 01:34:01.557 05:28:52 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:34:01.557 05:28:52 blockdev_raid5f -- scripts/common.sh@355 -- # echo 1 01:34:01.557 05:28:52 blockdev_raid5f -- scripts/common.sh@365 -- # ver1[v]=1 01:34:01.557 05:28:52 blockdev_raid5f -- scripts/common.sh@366 -- # decimal 2 01:34:01.557 05:28:52 blockdev_raid5f -- scripts/common.sh@353 -- # local d=2 01:34:01.557 05:28:52 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:34:01.557 05:28:52 blockdev_raid5f -- scripts/common.sh@355 -- # echo 2 01:34:01.557 05:28:52 blockdev_raid5f -- scripts/common.sh@366 -- # ver2[v]=2 01:34:01.557 05:28:52 blockdev_raid5f -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:34:01.557 05:28:52 blockdev_raid5f -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:34:01.557 05:28:52 blockdev_raid5f -- scripts/common.sh@368 -- # return 0 01:34:01.557 05:28:52 blockdev_raid5f -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:34:01.557 05:28:52 blockdev_raid5f -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:34:01.557 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:34:01.557 --rc genhtml_branch_coverage=1 01:34:01.557 --rc genhtml_function_coverage=1 01:34:01.557 --rc genhtml_legend=1 01:34:01.557 --rc geninfo_all_blocks=1 01:34:01.557 --rc geninfo_unexecuted_blocks=1 01:34:01.557 01:34:01.557 ' 01:34:01.557 05:28:52 blockdev_raid5f -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:34:01.557 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:34:01.557 --rc genhtml_branch_coverage=1 01:34:01.557 --rc genhtml_function_coverage=1 01:34:01.557 --rc genhtml_legend=1 01:34:01.557 --rc geninfo_all_blocks=1 01:34:01.557 --rc geninfo_unexecuted_blocks=1 01:34:01.557 01:34:01.557 ' 01:34:01.557 05:28:52 blockdev_raid5f -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:34:01.557 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:34:01.557 --rc genhtml_branch_coverage=1 01:34:01.557 --rc genhtml_function_coverage=1 01:34:01.557 --rc genhtml_legend=1 01:34:01.557 --rc geninfo_all_blocks=1 01:34:01.557 --rc geninfo_unexecuted_blocks=1 01:34:01.557 01:34:01.557 ' 01:34:01.557 05:28:52 blockdev_raid5f -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:34:01.557 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:34:01.557 --rc genhtml_branch_coverage=1 01:34:01.557 --rc genhtml_function_coverage=1 01:34:01.557 --rc genhtml_legend=1 01:34:01.557 --rc geninfo_all_blocks=1 01:34:01.557 --rc geninfo_unexecuted_blocks=1 01:34:01.557 01:34:01.557 ' 01:34:01.557 05:28:52 blockdev_raid5f -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 01:34:01.557 05:28:52 blockdev_raid5f -- bdev/nbd_common.sh@6 -- # set -e 01:34:01.557 05:28:52 blockdev_raid5f -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 01:34:01.557 05:28:52 blockdev_raid5f -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 01:34:01.557 05:28:52 blockdev_raid5f -- bdev/blockdev.sh@14 -- # 
nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 01:34:01.557 05:28:52 blockdev_raid5f -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 01:34:01.558 05:28:52 blockdev_raid5f -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 01:34:01.558 05:28:52 blockdev_raid5f -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 01:34:01.558 05:28:52 blockdev_raid5f -- bdev/blockdev.sh@20 -- # : 01:34:01.558 05:28:52 blockdev_raid5f -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 01:34:01.558 05:28:52 blockdev_raid5f -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 01:34:01.558 05:28:52 blockdev_raid5f -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 01:34:01.558 05:28:52 blockdev_raid5f -- bdev/blockdev.sh@711 -- # uname -s 01:34:01.558 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:34:01.558 05:28:52 blockdev_raid5f -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 01:34:01.558 05:28:52 blockdev_raid5f -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 01:34:01.558 05:28:52 blockdev_raid5f -- bdev/blockdev.sh@719 -- # test_type=raid5f 01:34:01.558 05:28:52 blockdev_raid5f -- bdev/blockdev.sh@720 -- # crypto_device= 01:34:01.558 05:28:52 blockdev_raid5f -- bdev/blockdev.sh@721 -- # dek= 01:34:01.558 05:28:52 blockdev_raid5f -- bdev/blockdev.sh@722 -- # env_ctx= 01:34:01.558 05:28:52 blockdev_raid5f -- bdev/blockdev.sh@723 -- # wait_for_rpc= 01:34:01.558 05:28:52 blockdev_raid5f -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 01:34:01.558 05:28:52 blockdev_raid5f -- bdev/blockdev.sh@727 -- # [[ raid5f == bdev ]] 01:34:01.558 05:28:52 blockdev_raid5f -- bdev/blockdev.sh@727 -- # [[ raid5f == crypto_* ]] 01:34:01.558 05:28:52 blockdev_raid5f -- bdev/blockdev.sh@730 -- # start_spdk_tgt 01:34:01.558 05:28:52 blockdev_raid5f -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=90407 01:34:01.558 05:28:52 blockdev_raid5f -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' 
SIGINT SIGTERM EXIT 01:34:01.558 05:28:52 blockdev_raid5f -- bdev/blockdev.sh@49 -- # waitforlisten 90407 01:34:01.558 05:28:52 blockdev_raid5f -- common/autotest_common.sh@835 -- # '[' -z 90407 ']' 01:34:01.558 05:28:52 blockdev_raid5f -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 01:34:01.558 05:28:52 blockdev_raid5f -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:34:01.558 05:28:52 blockdev_raid5f -- common/autotest_common.sh@840 -- # local max_retries=100 01:34:01.558 05:28:52 blockdev_raid5f -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:34:01.558 05:28:52 blockdev_raid5f -- common/autotest_common.sh@844 -- # xtrace_disable 01:34:01.558 05:28:52 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 01:34:01.558 [2024-12-09 05:28:53.093306] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:34:01.558 [2024-12-09 05:28:53.093518] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90407 ] 01:34:01.816 [2024-12-09 05:28:53.278318] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:34:01.816 [2024-12-09 05:28:53.412195] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:34:02.751 05:28:54 blockdev_raid5f -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:34:02.751 05:28:54 blockdev_raid5f -- common/autotest_common.sh@868 -- # return 0 01:34:02.751 05:28:54 blockdev_raid5f -- bdev/blockdev.sh@731 -- # case "$test_type" in 01:34:02.751 05:28:54 blockdev_raid5f -- bdev/blockdev.sh@763 -- # setup_raid5f_conf 01:34:02.751 05:28:54 blockdev_raid5f -- bdev/blockdev.sh@279 -- # rpc_cmd 01:34:02.751 05:28:54 blockdev_raid5f -- 
common/autotest_common.sh@563 -- # xtrace_disable 01:34:02.751 05:28:54 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 01:34:02.751 Malloc0 01:34:02.751 Malloc1 01:34:02.751 Malloc2 01:34:02.751 05:28:54 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:34:02.751 05:28:54 blockdev_raid5f -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 01:34:02.751 05:28:54 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 01:34:02.751 05:28:54 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 01:34:02.751 05:28:54 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:34:02.751 05:28:54 blockdev_raid5f -- bdev/blockdev.sh@777 -- # cat 01:34:02.751 05:28:54 blockdev_raid5f -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 01:34:02.751 05:28:54 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 01:34:02.751 05:28:54 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 01:34:02.751 05:28:54 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:34:02.751 05:28:54 blockdev_raid5f -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 01:34:02.751 05:28:54 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 01:34:02.751 05:28:54 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 01:34:03.010 05:28:54 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:34:03.010 05:28:54 blockdev_raid5f -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 01:34:03.010 05:28:54 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 01:34:03.010 05:28:54 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 01:34:03.010 05:28:54 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:34:03.010 05:28:54 blockdev_raid5f -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 01:34:03.010 05:28:54 blockdev_raid5f -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 
01:34:03.010 05:28:54 blockdev_raid5f -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 01:34:03.010 05:28:54 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 01:34:03.010 05:28:54 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 01:34:03.010 05:28:54 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:34:03.010 05:28:54 blockdev_raid5f -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 01:34:03.010 05:28:54 blockdev_raid5f -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "cfcbe848-8b89-48d6-814d-7401411b9c04"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "cfcbe848-8b89-48d6-814d-7401411b9c04",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "cfcbe848-8b89-48d6-814d-7401411b9c04",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "645e69ef-f0e5-42a6-8664-264afdeccbe0",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "c70c45da-cbf0-4158-ad89-5b44dd895047",' ' "is_configured": true,' ' "data_offset": 0,' ' 
"data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "eb818886-a6f4-4be9-b34b-b04a0ced305f",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 01:34:03.010 05:28:54 blockdev_raid5f -- bdev/blockdev.sh@786 -- # jq -r .name 01:34:03.010 05:28:54 blockdev_raid5f -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 01:34:03.010 05:28:54 blockdev_raid5f -- bdev/blockdev.sh@789 -- # hello_world_bdev=raid5f 01:34:03.010 05:28:54 blockdev_raid5f -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 01:34:03.010 05:28:54 blockdev_raid5f -- bdev/blockdev.sh@791 -- # killprocess 90407 01:34:03.010 05:28:54 blockdev_raid5f -- common/autotest_common.sh@954 -- # '[' -z 90407 ']' 01:34:03.010 05:28:54 blockdev_raid5f -- common/autotest_common.sh@958 -- # kill -0 90407 01:34:03.010 05:28:54 blockdev_raid5f -- common/autotest_common.sh@959 -- # uname 01:34:03.010 05:28:54 blockdev_raid5f -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:34:03.010 05:28:54 blockdev_raid5f -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90407 01:34:03.010 killing process with pid 90407 01:34:03.010 05:28:54 blockdev_raid5f -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:34:03.010 05:28:54 blockdev_raid5f -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:34:03.010 05:28:54 blockdev_raid5f -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90407' 01:34:03.010 05:28:54 blockdev_raid5f -- common/autotest_common.sh@973 -- # kill 90407 01:34:03.010 05:28:54 blockdev_raid5f -- common/autotest_common.sh@978 -- # wait 90407 01:34:05.542 05:28:56 blockdev_raid5f -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 01:34:05.542 05:28:56 blockdev_raid5f -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 01:34:05.542 05:28:56 
blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 01:34:05.542 05:28:56 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 01:34:05.542 05:28:56 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 01:34:05.542 ************************************ 01:34:05.542 START TEST bdev_hello_world 01:34:05.542 ************************************ 01:34:05.542 05:28:56 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 01:34:05.542 [2024-12-09 05:28:56.910163] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:34:05.542 [2024-12-09 05:28:56.910297] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90469 ] 01:34:05.542 [2024-12-09 05:28:57.075331] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:34:05.801 [2024-12-09 05:28:57.200000] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:34:06.366 [2024-12-09 05:28:57.718945] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 01:34:06.366 [2024-12-09 05:28:57.719010] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 01:34:06.366 [2024-12-09 05:28:57.719050] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 01:34:06.366 [2024-12-09 05:28:57.719637] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 01:34:06.366 [2024-12-09 05:28:57.719843] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 01:34:06.366 [2024-12-09 05:28:57.719868] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 01:34:06.366 [2024-12-09 05:28:57.719930] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev 
: Hello World! 01:34:06.366 01:34:06.366 [2024-12-09 05:28:57.719956] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 01:34:07.739 01:34:07.739 real 0m2.131s 01:34:07.739 user 0m1.717s 01:34:07.739 sys 0m0.292s 01:34:07.739 05:28:58 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 01:34:07.739 ************************************ 01:34:07.739 END TEST bdev_hello_world 01:34:07.739 ************************************ 01:34:07.739 05:28:58 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 01:34:07.739 05:28:59 blockdev_raid5f -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 01:34:07.739 05:28:59 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:34:07.739 05:28:59 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 01:34:07.739 05:28:59 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 01:34:07.739 ************************************ 01:34:07.739 START TEST bdev_bounds 01:34:07.739 ************************************ 01:34:07.739 05:28:59 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 01:34:07.739 05:28:59 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=90512 01:34:07.739 Process bdevio pid: 90512 01:34:07.739 05:28:59 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 01:34:07.739 05:28:59 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 01:34:07.739 05:28:59 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 90512' 01:34:07.739 05:28:59 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 90512 01:34:07.739 05:28:59 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 90512 ']' 01:34:07.739 Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:34:07.739 05:28:59 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:34:07.739 05:28:59 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 01:34:07.739 05:28:59 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:34:07.739 05:28:59 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 01:34:07.739 05:28:59 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 01:34:07.739 [2024-12-09 05:28:59.106607] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:34:07.739 [2024-12-09 05:28:59.106992] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90512 ] 01:34:07.739 [2024-12-09 05:28:59.282116] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 01:34:08.016 [2024-12-09 05:28:59.414259] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:34:08.016 [2024-12-09 05:28:59.414423] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:34:08.016 [2024-12-09 05:28:59.414450] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:34:08.591 05:29:00 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:34:08.591 05:29:00 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 01:34:08.591 05:29:00 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 01:34:08.850 I/O targets: 01:34:08.850 raid5f: 131072 blocks of 512 bytes (64 MiB) 01:34:08.850 01:34:08.850 
01:34:08.850 CUnit - A unit testing framework for C - Version 2.1-3 01:34:08.850 http://cunit.sourceforge.net/ 01:34:08.850 01:34:08.850 01:34:08.850 Suite: bdevio tests on: raid5f 01:34:08.850 Test: blockdev write read block ...passed 01:34:08.850 Test: blockdev write zeroes read block ...passed 01:34:08.850 Test: blockdev write zeroes read no split ...passed 01:34:08.850 Test: blockdev write zeroes read split ...passed 01:34:09.108 Test: blockdev write zeroes read split partial ...passed 01:34:09.108 Test: blockdev reset ...passed 01:34:09.108 Test: blockdev write read 8 blocks ...passed 01:34:09.108 Test: blockdev write read size > 128k ...passed 01:34:09.108 Test: blockdev write read invalid size ...passed 01:34:09.108 Test: blockdev write read offset + nbytes == size of blockdev ...passed 01:34:09.108 Test: blockdev write read offset + nbytes > size of blockdev ...passed 01:34:09.109 Test: blockdev write read max offset ...passed 01:34:09.109 Test: blockdev write read 2 blocks on overlapped address offset ...passed 01:34:09.109 Test: blockdev writev readv 8 blocks ...passed 01:34:09.109 Test: blockdev writev readv 30 x 1block ...passed 01:34:09.109 Test: blockdev writev readv block ...passed 01:34:09.109 Test: blockdev writev readv size > 128k ...passed 01:34:09.109 Test: blockdev writev readv size > 128k in two iovs ...passed 01:34:09.109 Test: blockdev comparev and writev ...passed 01:34:09.109 Test: blockdev nvme passthru rw ...passed 01:34:09.109 Test: blockdev nvme passthru vendor specific ...passed 01:34:09.109 Test: blockdev nvme admin passthru ...passed 01:34:09.109 Test: blockdev copy ...passed 01:34:09.109 01:34:09.109 Run Summary: Type Total Ran Passed Failed Inactive 01:34:09.109 suites 1 1 n/a 0 0 01:34:09.109 tests 23 23 23 0 0 01:34:09.109 asserts 130 130 130 0 n/a 01:34:09.109 01:34:09.109 Elapsed time = 0.579 seconds 01:34:09.109 0 01:34:09.109 05:29:00 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 90512 01:34:09.109 
05:29:00 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 90512 ']' 01:34:09.109 05:29:00 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 90512 01:34:09.109 05:29:00 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # uname 01:34:09.109 05:29:00 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:34:09.109 05:29:00 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90512 01:34:09.109 killing process with pid 90512 01:34:09.109 05:29:00 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:34:09.109 05:29:00 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:34:09.109 05:29:00 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90512' 01:34:09.109 05:29:00 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@973 -- # kill 90512 01:34:09.109 05:29:00 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@978 -- # wait 90512 01:34:10.485 05:29:01 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 01:34:10.485 01:34:10.485 real 0m2.881s 01:34:10.485 user 0m7.096s 01:34:10.485 sys 0m0.464s 01:34:10.485 05:29:01 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 01:34:10.485 05:29:01 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 01:34:10.485 ************************************ 01:34:10.485 END TEST bdev_bounds 01:34:10.485 ************************************ 01:34:10.485 05:29:01 blockdev_raid5f -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 01:34:10.485 05:29:01 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 01:34:10.485 05:29:01 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 01:34:10.485 
05:29:01 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 01:34:10.485 ************************************ 01:34:10.485 START TEST bdev_nbd 01:34:10.485 ************************************ 01:34:10.485 05:29:01 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 01:34:10.485 05:29:01 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 01:34:10.485 05:29:01 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 01:34:10.485 05:29:01 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:34:10.485 05:29:01 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 01:34:10.485 05:29:01 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('raid5f') 01:34:10.485 05:29:01 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 01:34:10.485 05:29:01 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1 01:34:10.485 05:29:01 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 01:34:10.485 05:29:01 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 01:34:10.485 05:29:01 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 01:34:10.485 05:29:01 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1 01:34:10.485 05:29:01 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0') 01:34:10.485 05:29:01 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 01:34:10.485 05:29:01 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('raid5f') 01:34:10.485 05:29:01 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 
-- # local bdev_list 01:34:10.485 05:29:01 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=90576 01:34:10.485 05:29:01 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 01:34:10.485 05:29:01 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 01:34:10.485 05:29:01 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 90576 /var/tmp/spdk-nbd.sock 01:34:10.485 05:29:01 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 90576 ']' 01:34:10.485 05:29:01 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 01:34:10.485 05:29:01 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 01:34:10.485 05:29:01 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 01:34:10.485 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 01:34:10.485 05:29:01 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 01:34:10.485 05:29:01 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 01:34:10.485 [2024-12-09 05:29:02.067266] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
01:34:10.485 [2024-12-09 05:29:02.067792] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:34:10.744 [2024-12-09 05:29:02.252639] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:34:11.003 [2024-12-09 05:29:02.379201] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:34:11.569 05:29:02 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:34:11.569 05:29:02 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 01:34:11.569 05:29:02 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 01:34:11.569 05:29:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:34:11.569 05:29:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 01:34:11.569 05:29:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 01:34:11.569 05:29:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 01:34:11.569 05:29:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:34:11.569 05:29:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 01:34:11.569 05:29:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 01:34:11.569 05:29:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 01:34:11.569 05:29:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 01:34:11.569 05:29:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 01:34:11.569 05:29:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 01:34:11.569 05:29:02 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 01:34:11.827 05:29:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 01:34:11.827 05:29:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 01:34:11.827 05:29:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 01:34:11.827 05:29:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 01:34:11.827 05:29:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 01:34:11.827 05:29:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 01:34:11.827 05:29:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 01:34:11.827 05:29:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 01:34:11.827 05:29:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 01:34:11.827 05:29:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 01:34:11.827 05:29:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 01:34:11.827 05:29:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 01:34:11.827 1+0 records in 01:34:11.827 1+0 records out 01:34:11.827 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000314812 s, 13.0 MB/s 01:34:11.828 05:29:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:34:11.828 05:29:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 01:34:11.828 05:29:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:34:11.828 05:29:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 
01:34:11.828 05:29:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 01:34:11.828 05:29:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 01:34:11.828 05:29:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 01:34:11.828 05:29:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 01:34:12.104 05:29:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 01:34:12.104 { 01:34:12.104 "nbd_device": "/dev/nbd0", 01:34:12.104 "bdev_name": "raid5f" 01:34:12.104 } 01:34:12.104 ]' 01:34:12.104 05:29:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 01:34:12.104 05:29:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 01:34:12.104 { 01:34:12.104 "nbd_device": "/dev/nbd0", 01:34:12.104 "bdev_name": "raid5f" 01:34:12.104 } 01:34:12.104 ]' 01:34:12.104 05:29:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 01:34:12.104 05:29:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 01:34:12.104 05:29:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:34:12.104 05:29:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 01:34:12.104 05:29:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 01:34:12.104 05:29:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 01:34:12.104 05:29:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 01:34:12.104 05:29:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 01:34:12.361 05:29:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd0 01:34:12.361 05:29:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 01:34:12.361 05:29:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 01:34:12.361 05:29:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:34:12.361 05:29:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:34:12.361 05:29:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 01:34:12.361 05:29:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 01:34:12.361 05:29:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 01:34:12.361 05:29:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 01:34:12.361 05:29:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:34:12.361 05:29:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 01:34:12.620 05:29:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 01:34:12.620 05:29:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 01:34:12.620 05:29:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 01:34:12.620 05:29:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 01:34:12.620 05:29:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 01:34:12.620 05:29:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 01:34:12.620 05:29:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 01:34:12.620 05:29:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 01:34:12.620 05:29:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 01:34:12.620 05:29:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 01:34:12.620 05:29:04 blockdev_raid5f.bdev_nbd 
-- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 01:34:12.620 05:29:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 01:34:12.620 05:29:04 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 01:34:12.620 05:29:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:34:12.620 05:29:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 01:34:12.620 05:29:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 01:34:12.620 05:29:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 01:34:12.620 05:29:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 01:34:12.620 05:29:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 01:34:12.620 05:29:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:34:12.620 05:29:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 01:34:12.620 05:29:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 01:34:12.620 05:29:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 01:34:12.620 05:29:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 01:34:12.620 05:29:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 01:34:12.620 05:29:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 01:34:12.620 05:29:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 01:34:12.620 05:29:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 01:34:12.879 /dev/nbd0 01:34:12.879 05:29:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 01:34:12.879 05:29:04 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 01:34:12.879 05:29:04 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 01:34:12.879 05:29:04 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 01:34:12.879 05:29:04 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 01:34:12.879 05:29:04 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 01:34:12.879 05:29:04 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 01:34:12.879 05:29:04 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 01:34:12.879 05:29:04 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 01:34:12.879 05:29:04 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 01:34:12.879 05:29:04 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 01:34:12.879 1+0 records in 01:34:12.879 1+0 records out 01:34:12.879 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000311826 s, 13.1 MB/s 01:34:12.879 05:29:04 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:34:12.879 05:29:04 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 01:34:12.879 05:29:04 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 01:34:12.879 05:29:04 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 01:34:12.879 05:29:04 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 01:34:12.879 05:29:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 01:34:12.879 05:29:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 01:34:12.879 05:29:04 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 01:34:12.879 05:29:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:34:12.879 05:29:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 01:34:13.139 05:29:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 01:34:13.139 { 01:34:13.139 "nbd_device": "/dev/nbd0", 01:34:13.139 "bdev_name": "raid5f" 01:34:13.139 } 01:34:13.139 ]' 01:34:13.139 05:29:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 01:34:13.139 { 01:34:13.139 "nbd_device": "/dev/nbd0", 01:34:13.139 "bdev_name": "raid5f" 01:34:13.139 } 01:34:13.139 ]' 01:34:13.139 05:29:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 01:34:13.139 05:29:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 01:34:13.397 05:29:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 01:34:13.397 05:29:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 01:34:13.397 05:29:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 01:34:13.397 05:29:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 01:34:13.397 05:29:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 01:34:13.397 05:29:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 01:34:13.397 05:29:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 01:34:13.397 05:29:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 01:34:13.397 05:29:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 01:34:13.397 05:29:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 01:34:13.397 05:29:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 01:34:13.397 05:29:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 01:34:13.397 05:29:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 01:34:13.397 256+0 records in 01:34:13.397 256+0 records out 01:34:13.397 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00996404 s, 105 MB/s 01:34:13.397 05:29:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 01:34:13.397 05:29:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 01:34:13.397 256+0 records in 01:34:13.397 256+0 records out 01:34:13.397 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0379796 s, 27.6 MB/s 01:34:13.397 05:29:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 01:34:13.397 05:29:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 01:34:13.397 05:29:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 01:34:13.397 05:29:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 01:34:13.397 05:29:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 01:34:13.397 05:29:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 01:34:13.397 05:29:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 01:34:13.397 05:29:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 01:34:13.397 05:29:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 01:34:13.397 05:29:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 01:34:13.397 05:29:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 01:34:13.397 05:29:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:34:13.397 05:29:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 01:34:13.397 05:29:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 01:34:13.397 05:29:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 01:34:13.397 05:29:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 01:34:13.397 05:29:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 01:34:13.655 05:29:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 01:34:13.655 05:29:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 01:34:13.655 05:29:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 01:34:13.655 05:29:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:34:13.655 05:29:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:34:13.655 05:29:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 01:34:13.655 05:29:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 01:34:13.655 05:29:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 01:34:13.655 05:29:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 01:34:13.655 05:29:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:34:13.655 05:29:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 
01:34:13.913 05:29:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 01:34:13.913 05:29:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 01:34:13.913 05:29:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 01:34:13.913 05:29:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 01:34:13.913 05:29:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 01:34:13.913 05:29:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 01:34:13.913 05:29:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 01:34:13.913 05:29:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 01:34:13.913 05:29:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 01:34:13.913 05:29:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 01:34:13.913 05:29:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 01:34:13.913 05:29:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 01:34:13.913 05:29:05 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 01:34:13.913 05:29:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:34:13.913 05:29:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 01:34:13.913 05:29:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 01:34:14.479 malloc_lvol_verify 01:34:14.479 05:29:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 01:34:14.736 8f785048-03cc-4a5d-a0a3-4ea0c162ce8d 01:34:14.736 05:29:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@136 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 01:34:14.994 a138e61e-03bd-486e-922a-a434feeed8fe 01:34:14.994 05:29:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 01:34:15.252 /dev/nbd0 01:34:15.252 05:29:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 01:34:15.252 05:29:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 01:34:15.252 05:29:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 01:34:15.252 05:29:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 01:34:15.252 05:29:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 01:34:15.252 mke2fs 1.47.0 (5-Feb-2023) 01:34:15.252 Discarding device blocks: 0/4096 done 01:34:15.252 Creating filesystem with 4096 1k blocks and 1024 inodes 01:34:15.252 01:34:15.252 Allocating group tables: 0/1 done 01:34:15.252 Writing inode tables: 0/1 done 01:34:15.252 Creating journal (1024 blocks): done 01:34:15.252 Writing superblocks and filesystem accounting information: 0/1 done 01:34:15.252 01:34:15.252 05:29:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 01:34:15.252 05:29:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:34:15.252 05:29:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 01:34:15.252 05:29:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 01:34:15.252 05:29:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 01:34:15.252 05:29:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 01:34:15.252 05:29:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 01:34:15.510 05:29:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 01:34:15.510 05:29:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 01:34:15.510 05:29:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 01:34:15.510 05:29:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:34:15.510 05:29:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:34:15.510 05:29:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 01:34:15.510 05:29:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 01:34:15.510 05:29:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 01:34:15.510 05:29:07 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 90576 01:34:15.510 05:29:07 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 90576 ']' 01:34:15.510 05:29:07 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 90576 01:34:15.510 05:29:07 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # uname 01:34:15.510 05:29:07 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:34:15.510 05:29:07 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90576 01:34:15.768 05:29:07 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:34:15.768 05:29:07 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:34:15.768 killing process with pid 90576 01:34:15.768 05:29:07 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90576' 01:34:15.768 05:29:07 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@973 -- # kill 90576 01:34:15.768 05:29:07 blockdev_raid5f.bdev_nbd -- 
common/autotest_common.sh@978 -- # wait 90576 01:34:17.148 05:29:08 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 01:34:17.148 01:34:17.148 real 0m6.594s 01:34:17.148 user 0m9.550s 01:34:17.148 sys 0m1.315s 01:34:17.148 05:29:08 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 01:34:17.148 05:29:08 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 01:34:17.148 ************************************ 01:34:17.148 END TEST bdev_nbd 01:34:17.148 ************************************ 01:34:17.148 05:29:08 blockdev_raid5f -- bdev/blockdev.sh@800 -- # [[ y == y ]] 01:34:17.148 05:29:08 blockdev_raid5f -- bdev/blockdev.sh@801 -- # '[' raid5f = nvme ']' 01:34:17.148 05:29:08 blockdev_raid5f -- bdev/blockdev.sh@801 -- # '[' raid5f = gpt ']' 01:34:17.148 05:29:08 blockdev_raid5f -- bdev/blockdev.sh@805 -- # run_test bdev_fio fio_test_suite '' 01:34:17.148 05:29:08 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:34:17.148 05:29:08 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 01:34:17.148 05:29:08 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 01:34:17.148 ************************************ 01:34:17.148 START TEST bdev_fio 01:34:17.148 ************************************ 01:34:17.148 05:29:08 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 01:34:17.148 05:29:08 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 01:34:17.148 05:29:08 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 01:34:17.148 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 01:34:17.148 05:29:08 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 01:34:17.148 05:29:08 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 01:34:17.148 05:29:08 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@338 -- # sed s/--env-context=// 01:34:17.148 05:29:08 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 01:34:17.148 05:29:08 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 01:34:17.148 05:29:08 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 01:34:17.148 05:29:08 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 01:34:17.148 05:29:08 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 01:34:17.148 05:29:08 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 01:34:17.148 05:29:08 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 01:34:17.148 05:29:08 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 01:34:17.148 05:29:08 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 01:34:17.148 05:29:08 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 01:34:17.148 05:29:08 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 01:34:17.149 05:29:08 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 01:34:17.149 05:29:08 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 01:34:17.149 05:29:08 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1318 -- # cat 01:34:17.149 05:29:08 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 01:34:17.149 05:29:08 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 01:34:17.149 05:29:08 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 
01:34:17.149 05:29:08 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1 01:34:17.149 05:29:08 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 01:34:17.149 05:29:08 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid5f]' 01:34:17.149 05:29:08 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid5f 01:34:17.149 05:29:08 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 01:34:17.149 05:29:08 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 01:34:17.149 05:29:08 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 01:34:17.149 05:29:08 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 01:34:17.149 05:29:08 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 01:34:17.149 ************************************ 01:34:17.149 START TEST bdev_fio_rw_verify 01:34:17.149 ************************************ 01:34:17.149 05:29:08 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 01:34:17.149 05:29:08 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 01:34:17.149 05:29:08 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 01:34:17.149 05:29:08 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 01:34:17.149 05:29:08 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 01:34:17.149 05:29:08 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:34:17.149 05:29:08 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 01:34:17.149 05:29:08 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 01:34:17.149 05:29:08 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 01:34:17.149 05:29:08 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:34:17.149 05:29:08 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 01:34:17.149 05:29:08 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 01:34:17.149 05:29:08 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 01:34:17.149 05:29:08 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 01:34:17.149 05:29:08 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- 
common/autotest_common.sh@1351 -- # break 01:34:17.149 05:29:08 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 01:34:17.149 05:29:08 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 01:34:17.407 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 01:34:17.407 fio-3.35 01:34:17.407 Starting 1 thread 01:34:29.649 01:34:29.649 job_raid5f: (groupid=0, jobs=1): err= 0: pid=90790: Mon Dec 9 05:29:19 2024 01:34:29.649 read: IOPS=8378, BW=32.7MiB/s (34.3MB/s)(327MiB/10001msec) 01:34:29.649 slat (usec): min=21, max=114, avg=29.79, stdev= 6.13 01:34:29.649 clat (usec): min=12, max=511, avg=189.10, stdev=73.76 01:34:29.649 lat (usec): min=40, max=541, avg=218.89, stdev=74.99 01:34:29.649 clat percentiles (usec): 01:34:29.649 | 50.000th=[ 190], 99.000th=[ 363], 99.900th=[ 420], 99.990th=[ 457], 01:34:29.649 | 99.999th=[ 510] 01:34:29.649 write: IOPS=8823, BW=34.5MiB/s (36.1MB/s)(340MiB/9872msec); 0 zone resets 01:34:29.649 slat (usec): min=10, max=248, avg=23.57, stdev= 6.50 01:34:29.649 clat (usec): min=82, max=1446, avg=436.93, stdev=69.37 01:34:29.649 lat (usec): min=104, max=1694, avg=460.51, stdev=71.57 01:34:29.649 clat percentiles (usec): 01:34:29.649 | 50.000th=[ 437], 99.000th=[ 627], 99.900th=[ 717], 99.990th=[ 1074], 01:34:29.649 | 99.999th=[ 1450] 01:34:29.649 bw ( KiB/s): min=29464, max=38960, per=98.74%, avg=34850.90, stdev=2501.51, samples=20 01:34:29.649 iops : min= 7366, max= 9740, avg=8712.70, stdev=625.37, samples=20 01:34:29.649 lat (usec) : 20=0.01%, 50=0.01%, 100=6.55%, 
250=30.26%, 500=55.66% 01:34:29.649 lat (usec) : 750=7.50%, 1000=0.01% 01:34:29.649 lat (msec) : 2=0.01% 01:34:29.649 cpu : usr=98.68%, sys=0.49%, ctx=19, majf=0, minf=7317 01:34:29.649 IO depths : 1=7.7%, 2=20.0%, 4=55.0%, 8=17.2%, 16=0.0%, 32=0.0%, >=64=0.0% 01:34:29.649 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:34:29.649 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:34:29.649 issued rwts: total=83798,87110,0,0 short=0,0,0,0 dropped=0,0,0,0 01:34:29.649 latency : target=0, window=0, percentile=100.00%, depth=8 01:34:29.649 01:34:29.649 Run status group 0 (all jobs): 01:34:29.649 READ: bw=32.7MiB/s (34.3MB/s), 32.7MiB/s-32.7MiB/s (34.3MB/s-34.3MB/s), io=327MiB (343MB), run=10001-10001msec 01:34:29.649 WRITE: bw=34.5MiB/s (36.1MB/s), 34.5MiB/s-34.5MiB/s (36.1MB/s-36.1MB/s), io=340MiB (357MB), run=9872-9872msec 01:34:30.215 ----------------------------------------------------- 01:34:30.215 Suppressions used: 01:34:30.215 count bytes template 01:34:30.215 1 7 /usr/src/fio/parse.c 01:34:30.215 701 67296 /usr/src/fio/iolog.c 01:34:30.215 1 8 libtcmalloc_minimal.so 01:34:30.215 1 904 libcrypto.so 01:34:30.215 ----------------------------------------------------- 01:34:30.215 01:34:30.215 01:34:30.215 real 0m13.024s 01:34:30.215 user 0m13.290s 01:34:30.215 sys 0m0.736s 01:34:30.215 05:29:21 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 01:34:30.215 05:29:21 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 01:34:30.215 ************************************ 01:34:30.215 END TEST bdev_fio_rw_verify 01:34:30.215 ************************************ 01:34:30.215 05:29:21 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 01:34:30.215 05:29:21 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 01:34:30.215 05:29:21 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 01:34:30.215 05:29:21 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 01:34:30.215 05:29:21 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 01:34:30.215 05:29:21 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 01:34:30.215 05:29:21 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 01:34:30.215 05:29:21 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 01:34:30.215 05:29:21 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 01:34:30.215 05:29:21 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 01:34:30.215 05:29:21 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 01:34:30.215 05:29:21 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 01:34:30.215 05:29:21 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 01:34:30.215 05:29:21 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 01:34:30.215 05:29:21 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 01:34:30.215 05:29:21 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 01:34:30.215 05:29:21 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "cfcbe848-8b89-48d6-814d-7401411b9c04"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "cfcbe848-8b89-48d6-814d-7401411b9c04",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "cfcbe848-8b89-48d6-814d-7401411b9c04",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "645e69ef-f0e5-42a6-8664-264afdeccbe0",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "c70c45da-cbf0-4158-ad89-5b44dd895047",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "eb818886-a6f4-4be9-b34b-b04a0ced305f",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 01:34:30.215 05:29:21 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 01:34:30.474 05:29:21 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 01:34:30.474 05:29:21 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 01:34:30.474 /home/vagrant/spdk_repo/spdk 01:34:30.474 ************************************ 01:34:30.474 END TEST bdev_fio 01:34:30.474 ************************************ 01:34:30.474 05:29:21 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@361 -- # popd 01:34:30.474 05:29:21 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 01:34:30.474 05:29:21 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 01:34:30.474 01:34:30.474 real 0m13.247s 01:34:30.474 user 0m13.397s 01:34:30.474 sys 0m0.821s 01:34:30.474 05:29:21 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 01:34:30.474 05:29:21 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 01:34:30.474 05:29:21 blockdev_raid5f -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 01:34:30.474 05:29:21 blockdev_raid5f -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 01:34:30.474 05:29:21 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 01:34:30.474 05:29:21 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 01:34:30.474 05:29:21 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 01:34:30.474 ************************************ 01:34:30.474 START TEST bdev_verify 01:34:30.474 ************************************ 01:34:30.474 05:29:21 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 01:34:30.474 [2024-12-09 05:29:22.003038] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
01:34:30.474 [2024-12-09 05:29:22.003222] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90954 ] 01:34:30.733 [2024-12-09 05:29:22.164433] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 01:34:30.733 [2024-12-09 05:29:22.279723] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:34:30.733 [2024-12-09 05:29:22.279737] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:34:31.297 Running I/O for 5 seconds... 01:34:33.622 13322.00 IOPS, 52.04 MiB/s [2024-12-09T05:29:26.174Z] 12654.00 IOPS, 49.43 MiB/s [2024-12-09T05:29:27.108Z] 12426.00 IOPS, 48.54 MiB/s [2024-12-09T05:29:28.040Z] 12342.25 IOPS, 48.21 MiB/s [2024-12-09T05:29:28.040Z] 12272.40 IOPS, 47.94 MiB/s 01:34:36.423 Latency(us) 01:34:36.423 [2024-12-09T05:29:28.040Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:34:36.423 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 01:34:36.423 Verification LBA range: start 0x0 length 0x2000 01:34:36.423 raid5f : 5.01 6183.20 24.15 0.00 0.00 31179.45 310.92 27167.65 01:34:36.423 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 01:34:36.423 Verification LBA range: start 0x2000 length 0x2000 01:34:36.423 raid5f : 5.02 6059.20 23.67 0.00 0.00 31831.23 318.37 27882.59 01:34:36.423 [2024-12-09T05:29:28.040Z] =================================================================================================================== 01:34:36.423 [2024-12-09T05:29:28.040Z] Total : 12242.39 47.82 0.00 0.00 31502.34 310.92 27882.59 01:34:37.793 01:34:37.793 real 0m7.371s 01:34:37.793 user 0m13.491s 01:34:37.793 sys 0m0.326s 01:34:37.793 ************************************ 01:34:37.793 END TEST bdev_verify 01:34:37.793 ************************************ 
01:34:37.793 05:29:29 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 01:34:37.793 05:29:29 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@10 -- # set +x 01:34:37.793 05:29:29 blockdev_raid5f -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 01:34:37.793 05:29:29 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 01:34:37.793 05:29:29 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 01:34:37.793 05:29:29 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 01:34:37.793 ************************************ 01:34:37.793 START TEST bdev_verify_big_io 01:34:37.793 ************************************ 01:34:37.793 05:29:29 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 01:34:38.051 [2024-12-09 05:29:29.436407] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:34:38.051 [2024-12-09 05:29:29.436607] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91046 ] 01:34:38.051 [2024-12-09 05:29:29.619024] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 01:34:38.309 [2024-12-09 05:29:29.735859] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:34:38.309 [2024-12-09 05:29:29.735873] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:34:38.875 Running I/O for 5 seconds... 
01:34:41.182 568.00 IOPS, 35.50 MiB/s [2024-12-09T05:29:33.382Z] 727.50 IOPS, 45.47 MiB/s [2024-12-09T05:29:34.754Z] 761.33 IOPS, 47.58 MiB/s [2024-12-09T05:29:35.691Z] 761.50 IOPS, 47.59 MiB/s [2024-12-09T05:29:35.691Z] 761.60 IOPS, 47.60 MiB/s 01:34:44.074 Latency(us) 01:34:44.074 [2024-12-09T05:29:35.691Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:34:44.074 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 01:34:44.074 Verification LBA range: start 0x0 length 0x200 01:34:44.074 raid5f : 5.19 391.81 24.49 0.00 0.00 8114848.62 194.56 343170.33 01:34:44.074 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 01:34:44.074 Verification LBA range: start 0x200 length 0x200 01:34:44.074 raid5f : 5.15 394.50 24.66 0.00 0.00 7997473.72 256.93 343170.33 01:34:44.074 [2024-12-09T05:29:35.691Z] =================================================================================================================== 01:34:44.074 [2024-12-09T05:29:35.691Z] Total : 786.31 49.14 0.00 0.00 8056161.17 194.56 343170.33 01:34:45.447 01:34:45.447 real 0m7.623s 01:34:45.447 user 0m13.906s 01:34:45.447 sys 0m0.348s 01:34:45.447 05:29:36 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 01:34:45.447 05:29:36 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 01:34:45.447 ************************************ 01:34:45.447 END TEST bdev_verify_big_io 01:34:45.447 ************************************ 01:34:45.447 05:29:36 blockdev_raid5f -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 01:34:45.447 05:29:36 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 01:34:45.447 05:29:36 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 01:34:45.447 05:29:36 
blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 01:34:45.447 ************************************ 01:34:45.447 START TEST bdev_write_zeroes 01:34:45.447 ************************************ 01:34:45.447 05:29:37 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 01:34:45.704 [2024-12-09 05:29:37.107295] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:34:45.705 [2024-12-09 05:29:37.107534] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91140 ] 01:34:45.705 [2024-12-09 05:29:37.300864] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:34:45.963 [2024-12-09 05:29:37.452430] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:34:46.528 Running I/O for 1 seconds... 
01:34:47.486 17559.00 IOPS, 68.59 MiB/s 01:34:47.486 Latency(us) 01:34:47.486 [2024-12-09T05:29:39.103Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:34:47.486 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 01:34:47.486 raid5f : 1.01 17556.87 68.58 0.00 0.00 7260.91 1966.08 8996.31 01:34:47.486 [2024-12-09T05:29:39.103Z] =================================================================================================================== 01:34:47.486 [2024-12-09T05:29:39.103Z] Total : 17556.87 68.58 0.00 0.00 7260.91 1966.08 8996.31 01:34:48.860 01:34:48.860 real 0m3.450s 01:34:48.860 user 0m2.979s 01:34:48.860 sys 0m0.333s 01:34:48.860 05:29:40 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 01:34:48.860 05:29:40 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 01:34:48.860 ************************************ 01:34:48.860 END TEST bdev_write_zeroes 01:34:48.860 ************************************ 01:34:49.118 05:29:40 blockdev_raid5f -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 01:34:49.118 05:29:40 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 01:34:49.118 05:29:40 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 01:34:49.118 05:29:40 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 01:34:49.118 ************************************ 01:34:49.118 START TEST bdev_json_nonenclosed 01:34:49.118 ************************************ 01:34:49.118 05:29:40 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 01:34:49.118 [2024-12-09 
05:29:40.622833] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:34:49.118 [2024-12-09 05:29:40.623294] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91198 ] 01:34:49.376 [2024-12-09 05:29:40.807520] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:34:49.376 [2024-12-09 05:29:40.932665] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:34:49.376 [2024-12-09 05:29:40.932800] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 01:34:49.376 [2024-12-09 05:29:40.932835] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 01:34:49.376 [2024-12-09 05:29:40.932848] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 01:34:49.635 01:34:49.635 real 0m0.719s 01:34:49.635 user 0m0.465s 01:34:49.635 sys 0m0.147s 01:34:49.635 05:29:41 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 01:34:49.635 ************************************ 01:34:49.635 END TEST bdev_json_nonenclosed 01:34:49.635 ************************************ 01:34:49.635 05:29:41 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 01:34:49.893 05:29:41 blockdev_raid5f -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 01:34:49.893 05:29:41 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 01:34:49.893 05:29:41 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 01:34:49.893 05:29:41 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 01:34:49.893 
************************************ 01:34:49.893 START TEST bdev_json_nonarray 01:34:49.893 ************************************ 01:34:49.893 05:29:41 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 01:34:49.893 [2024-12-09 05:29:41.400781] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:34:49.893 [2024-12-09 05:29:41.401002] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91225 ] 01:34:50.150 [2024-12-09 05:29:41.590208] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:34:50.150 [2024-12-09 05:29:41.740707] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:34:50.150 [2024-12-09 05:29:41.740836] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
01:34:50.150 [2024-12-09 05:29:41.740866] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 01:34:50.150 [2024-12-09 05:29:41.740891] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 01:34:50.714 01:34:50.714 real 0m0.880s 01:34:50.714 user 0m0.605s 01:34:50.714 sys 0m0.167s 01:34:50.714 ************************************ 01:34:50.714 05:29:42 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 01:34:50.714 05:29:42 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 01:34:50.714 END TEST bdev_json_nonarray 01:34:50.714 ************************************ 01:34:50.714 05:29:42 blockdev_raid5f -- bdev/blockdev.sh@824 -- # [[ raid5f == bdev ]] 01:34:50.714 05:29:42 blockdev_raid5f -- bdev/blockdev.sh@832 -- # [[ raid5f == gpt ]] 01:34:50.714 05:29:42 blockdev_raid5f -- bdev/blockdev.sh@836 -- # [[ raid5f == crypto_sw ]] 01:34:50.714 05:29:42 blockdev_raid5f -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 01:34:50.714 05:29:42 blockdev_raid5f -- bdev/blockdev.sh@849 -- # cleanup 01:34:50.714 05:29:42 blockdev_raid5f -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 01:34:50.714 05:29:42 blockdev_raid5f -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 01:34:50.714 05:29:42 blockdev_raid5f -- bdev/blockdev.sh@26 -- # [[ raid5f == rbd ]] 01:34:50.714 05:29:42 blockdev_raid5f -- bdev/blockdev.sh@30 -- # [[ raid5f == daos ]] 01:34:50.714 05:29:42 blockdev_raid5f -- bdev/blockdev.sh@34 -- # [[ raid5f = \g\p\t ]] 01:34:50.714 05:29:42 blockdev_raid5f -- bdev/blockdev.sh@40 -- # [[ raid5f == xnvme ]] 01:34:50.714 01:34:50.714 real 0m49.467s 01:34:50.714 user 1m7.405s 01:34:50.714 sys 0m5.219s 01:34:50.714 05:29:42 blockdev_raid5f -- common/autotest_common.sh@1130 -- # xtrace_disable 01:34:50.714 ************************************ 01:34:50.714 END TEST blockdev_raid5f 01:34:50.714 
************************************
01:34:50.714 05:29:42 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x
01:34:50.714 05:29:42 -- spdk/autotest.sh@194 -- # uname -s
01:34:50.714 05:29:42 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]]
01:34:50.714 05:29:42 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]]
01:34:50.714 05:29:42 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]]
01:34:50.714 05:29:42 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']'
01:34:50.714 05:29:42 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']'
01:34:50.714 05:29:42 -- spdk/autotest.sh@260 -- # timing_exit lib
01:34:50.714 05:29:42 -- common/autotest_common.sh@732 -- # xtrace_disable
01:34:50.714 05:29:42 -- common/autotest_common.sh@10 -- # set +x
01:34:50.714 05:29:42 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']'
01:34:50.714 05:29:42 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']'
01:34:50.714 05:29:42 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']'
01:34:50.714 05:29:42 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']'
01:34:50.714 05:29:42 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']'
01:34:50.714 05:29:42 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']'
01:34:50.714 05:29:42 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']'
01:34:50.714 05:29:42 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']'
01:34:50.714 05:29:42 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']'
01:34:50.714 05:29:42 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']'
01:34:50.714 05:29:42 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']'
01:34:50.714 05:29:42 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']'
01:34:50.714 05:29:42 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']'
01:34:50.714 05:29:42 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']'
01:34:50.714 05:29:42 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]]
01:34:50.714 05:29:42 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]]
01:34:50.714 05:29:42 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]]
01:34:50.714 05:29:42 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]]
01:34:50.714 05:29:42 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT
01:34:50.714 05:29:42 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup
01:34:50.714 05:29:42 -- common/autotest_common.sh@726 -- # xtrace_disable
01:34:50.714 05:29:42 -- common/autotest_common.sh@10 -- # set +x
01:34:50.714 05:29:42 -- spdk/autotest.sh@388 -- # autotest_cleanup
01:34:50.714 05:29:42 -- common/autotest_common.sh@1396 -- # local autotest_es=0
01:34:50.714 05:29:42 -- common/autotest_common.sh@1397 -- # xtrace_disable
01:34:50.714 05:29:42 -- common/autotest_common.sh@10 -- # set +x
01:34:52.613 INFO: APP EXITING
01:34:52.613 INFO: killing all VMs
01:34:52.613 INFO: killing vhost app
01:34:52.613 INFO: EXIT DONE
01:34:52.871 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
01:34:52.871 Waiting for block devices as requested
01:34:52.871 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
01:34:53.129 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
01:34:53.701 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
01:34:53.960 Cleaning
01:34:53.960 Removing: /var/run/dpdk/spdk0/config
01:34:53.960 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
01:34:53.960 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
01:34:53.960 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
01:34:53.960 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
01:34:53.960 Removing: /var/run/dpdk/spdk0/fbarray_memzone
01:34:53.960 Removing: /var/run/dpdk/spdk0/hugepage_info
01:34:53.960 Removing: /dev/shm/spdk_tgt_trace.pid56724
01:34:53.960 Removing: /var/run/dpdk/spdk0
01:34:53.960 Removing: /var/run/dpdk/spdk_pid56500
01:34:53.960 Removing: /var/run/dpdk/spdk_pid56724
01:34:53.960 Removing: /var/run/dpdk/spdk_pid56953
01:34:53.960 Removing: /var/run/dpdk/spdk_pid57057
01:34:53.960 Removing: /var/run/dpdk/spdk_pid57108
01:34:53.960 Removing: /var/run/dpdk/spdk_pid57236
01:34:53.960 Removing: /var/run/dpdk/spdk_pid57259
01:34:53.960 Removing: /var/run/dpdk/spdk_pid57464
01:34:53.960 Removing: /var/run/dpdk/spdk_pid57569
01:34:53.960 Removing: /var/run/dpdk/spdk_pid57676
01:34:53.960 Removing: /var/run/dpdk/spdk_pid57798
01:34:53.960 Removing: /var/run/dpdk/spdk_pid57917
01:34:53.960 Removing: /var/run/dpdk/spdk_pid57951
01:34:53.960 Removing: /var/run/dpdk/spdk_pid57993
01:34:53.960 Removing: /var/run/dpdk/spdk_pid58069
01:34:53.960 Removing: /var/run/dpdk/spdk_pid58153
01:34:53.960 Removing: /var/run/dpdk/spdk_pid58622
01:34:53.960 Removing: /var/run/dpdk/spdk_pid58692
01:34:53.960 Removing: /var/run/dpdk/spdk_pid58771
01:34:53.960 Removing: /var/run/dpdk/spdk_pid58793
01:34:53.960 Removing: /var/run/dpdk/spdk_pid58941
01:34:53.960 Removing: /var/run/dpdk/spdk_pid58959
01:34:53.960 Removing: /var/run/dpdk/spdk_pid59110
01:34:53.960 Removing: /var/run/dpdk/spdk_pid59126
01:34:53.960 Removing: /var/run/dpdk/spdk_pid59196
01:34:53.960 Removing: /var/run/dpdk/spdk_pid59219
01:34:53.960 Removing: /var/run/dpdk/spdk_pid59278
01:34:53.960 Removing: /var/run/dpdk/spdk_pid59301
01:34:53.960 Removing: /var/run/dpdk/spdk_pid59502
01:34:53.960 Removing: /var/run/dpdk/spdk_pid59544
01:34:53.960 Removing: /var/run/dpdk/spdk_pid59626
01:34:53.960 Removing: /var/run/dpdk/spdk_pid61001
01:34:53.960 Removing: /var/run/dpdk/spdk_pid61218
01:34:53.960 Removing: /var/run/dpdk/spdk_pid61364
01:34:53.960 Removing: /var/run/dpdk/spdk_pid62023
01:34:53.960 Removing: /var/run/dpdk/spdk_pid62235
01:34:53.960 Removing: /var/run/dpdk/spdk_pid62381
01:34:53.960 Removing: /var/run/dpdk/spdk_pid63035
01:34:53.960 Removing: /var/run/dpdk/spdk_pid63376
01:34:53.960 Removing: /var/run/dpdk/spdk_pid63522
01:34:53.960 Removing: /var/run/dpdk/spdk_pid64963
01:34:53.960 Removing: /var/run/dpdk/spdk_pid65227
01:34:53.960 Removing: /var/run/dpdk/spdk_pid65384
01:34:53.960 Removing: /var/run/dpdk/spdk_pid66802
01:34:53.960 Removing: /var/run/dpdk/spdk_pid67068
01:34:53.960 Removing: /var/run/dpdk/spdk_pid67213
01:34:53.960 Removing: /var/run/dpdk/spdk_pid68633
01:34:53.960 Removing: /var/run/dpdk/spdk_pid69094
01:34:53.960 Removing: /var/run/dpdk/spdk_pid69242
01:34:53.960 Removing: /var/run/dpdk/spdk_pid70755
01:34:53.960 Removing: /var/run/dpdk/spdk_pid71027
01:34:53.960 Removing: /var/run/dpdk/spdk_pid71174
01:34:53.960 Removing: /var/run/dpdk/spdk_pid72689
01:34:53.960 Removing: /var/run/dpdk/spdk_pid72965
01:34:53.960 Removing: /var/run/dpdk/spdk_pid73105
01:34:53.960 Removing: /var/run/dpdk/spdk_pid74624
01:34:53.960 Removing: /var/run/dpdk/spdk_pid75122
01:34:53.960 Removing: /var/run/dpdk/spdk_pid75268
01:34:53.960 Removing: /var/run/dpdk/spdk_pid75412
01:34:53.960 Removing: /var/run/dpdk/spdk_pid75859
01:34:53.960 Removing: /var/run/dpdk/spdk_pid76621
01:34:53.960 Removing: /var/run/dpdk/spdk_pid77009
01:34:53.960 Removing: /var/run/dpdk/spdk_pid77709
01:34:53.960 Removing: /var/run/dpdk/spdk_pid78194
01:34:53.960 Removing: /var/run/dpdk/spdk_pid78992
01:34:53.960 Removing: /var/run/dpdk/spdk_pid79419
01:34:53.960 Removing: /var/run/dpdk/spdk_pid81422
01:34:53.960 Removing: /var/run/dpdk/spdk_pid81873
01:34:53.960 Removing: /var/run/dpdk/spdk_pid82332
01:34:53.960 Removing: /var/run/dpdk/spdk_pid84458
01:34:53.960 Removing: /var/run/dpdk/spdk_pid84955
01:34:53.960 Removing: /var/run/dpdk/spdk_pid85464
01:34:53.960 Removing: /var/run/dpdk/spdk_pid86542
01:34:54.219 Removing: /var/run/dpdk/spdk_pid86871
01:34:54.219 Removing: /var/run/dpdk/spdk_pid87833
01:34:54.219 Removing: /var/run/dpdk/spdk_pid88160
01:34:54.219 Removing: /var/run/dpdk/spdk_pid89116
01:34:54.219 Removing: /var/run/dpdk/spdk_pid89450
01:34:54.219 Removing: /var/run/dpdk/spdk_pid90127
01:34:54.219 Removing: /var/run/dpdk/spdk_pid90407
01:34:54.219 Removing: /var/run/dpdk/spdk_pid90469
01:34:54.219 Removing: /var/run/dpdk/spdk_pid90512
01:34:54.219 Removing: /var/run/dpdk/spdk_pid90775
01:34:54.219 Removing: /var/run/dpdk/spdk_pid90954
01:34:54.219 Removing: /var/run/dpdk/spdk_pid91046
01:34:54.219 Removing: /var/run/dpdk/spdk_pid91140
01:34:54.219 Removing: /var/run/dpdk/spdk_pid91198
01:34:54.219 Removing: /var/run/dpdk/spdk_pid91225
01:34:54.219 Clean
01:34:54.219 05:29:45 -- common/autotest_common.sh@1453 -- # return 0
01:34:54.219 05:29:45 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
01:34:54.219 05:29:45 -- common/autotest_common.sh@732 -- # xtrace_disable
01:34:54.219 05:29:45 -- common/autotest_common.sh@10 -- # set +x
01:34:54.219 05:29:45 -- spdk/autotest.sh@391 -- # timing_exit autotest
01:34:54.219 05:29:45 -- common/autotest_common.sh@732 -- # xtrace_disable
01:34:54.219 05:29:45 -- common/autotest_common.sh@10 -- # set +x
01:34:54.219 05:29:45 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt
01:34:54.219 05:29:45 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]]
01:34:54.219 05:29:45 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log
01:34:54.219 05:29:45 -- spdk/autotest.sh@396 -- # [[ y == y ]]
01:34:54.219 05:29:45 -- spdk/autotest.sh@398 -- # hostname
01:34:54.219 05:29:45 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
01:34:54.478 geninfo: WARNING: invalid characters removed from testname!
01:35:21.016 05:30:12 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
01:35:25.199 05:30:16 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
01:35:28.486 05:30:19 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
01:35:31.019 05:30:22 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
01:35:34.313 05:30:25 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
01:35:36.843 05:30:28 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
01:35:40.130 05:30:31 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
01:35:40.130 05:30:31 -- spdk/autorun.sh@1 -- $ timing_finish
01:35:40.130 05:30:31 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]]
01:35:40.130 05:30:31 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
01:35:40.130 05:30:31 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
01:35:40.130 05:30:31 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
01:35:40.130 + [[ -n 5266 ]]
01:35:40.130 + sudo kill 5266
01:35:40.139 [Pipeline] }
01:35:40.154 [Pipeline] // timeout
01:35:40.160 [Pipeline] }
01:35:40.200 [Pipeline] // stage
01:35:40.205 [Pipeline] }
01:35:40.222 [Pipeline] // catchError
01:35:40.232 [Pipeline] stage
01:35:40.235 [Pipeline] { (Stop VM)
01:35:40.251 [Pipeline] sh
01:35:40.529 + vagrant halt
01:35:44.708 ==> default: Halting domain...
01:35:49.991 [Pipeline] sh
01:35:50.269 + vagrant destroy -f
01:35:54.453 ==> default: Removing domain...
01:35:54.463 [Pipeline] sh
01:35:54.738 + mv output /var/jenkins/workspace/raid-vg-autotest/output
01:35:54.747 [Pipeline] }
01:35:54.761 [Pipeline] // stage
01:35:54.766 [Pipeline] }
01:35:54.779 [Pipeline] // dir
01:35:54.784 [Pipeline] }
01:35:54.798 [Pipeline] // wrap
01:35:54.804 [Pipeline] }
01:35:54.816 [Pipeline] // catchError
01:35:54.825 [Pipeline] stage
01:35:54.827 [Pipeline] { (Epilogue)
01:35:54.840 [Pipeline] sh
01:35:55.121 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
01:36:01.694 [Pipeline] catchError
01:36:01.697 [Pipeline] {
01:36:01.711 [Pipeline] sh
01:36:01.991 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
01:36:03.364 Artifacts sizes are good
01:36:03.373 [Pipeline] }
01:36:03.387 [Pipeline] // catchError
01:36:03.397 [Pipeline] archiveArtifacts
01:36:03.404 Archiving artifacts
01:36:03.510 [Pipeline] cleanWs
01:36:03.524 [WS-CLEANUP] Deleting project workspace...
01:36:03.524 [WS-CLEANUP] Deferred wipeout is used...
01:36:03.531 [WS-CLEANUP] done
01:36:03.532 [Pipeline] }
01:36:03.548 [Pipeline] // stage
01:36:03.555 [Pipeline] }
01:36:03.570 [Pipeline] // node
01:36:03.575 [Pipeline] End of Pipeline
01:36:03.655 Finished: SUCCESS